Test Report: KVM_Linux_crio 21830

                    
3aa0d58a4eff13dd9d5f058e659508fb4ffd2206:2025-11-01:42156

Failed tests (14/343)

TestAddons/parallel/Ingress (491.94s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-086339 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-086339 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-086339 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [80f28ba1-b1ac-4f7a-9a35-3fd834d8e54e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:337: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:252: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:252: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-086339 -n addons-086339
addons_test.go:252: TestAddons/parallel/Ingress: showing logs for failed pods as of 2025-11-01 10:01:11.614230965 +0000 UTC m=+686.356438952
addons_test.go:252: (dbg) Run:  kubectl --context addons-086339 describe po nginx -n default
addons_test.go:252: (dbg) kubectl --context addons-086339 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-086339/192.168.39.58
Start Time:       Sat, 01 Nov 2025 09:53:11 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.29
IPs:
IP:  10.244.0.29
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sggwf (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-sggwf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  8m                    default-scheduler  Successfully assigned default/nginx to addons-086339
Warning  Failed     5m4s                  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     100s (x3 over 6m53s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     100s (x4 over 6m53s)  kubelet            Error: ErrImagePull
Normal   BackOff    29s (x11 over 6m52s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     29s (x11 over 6m52s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    15s (x5 over 8m)      kubelet            Pulling image "docker.io/nginx:alpine"
addons_test.go:252: (dbg) Run:  kubectl --context addons-086339 logs nginx -n default
addons_test.go:252: (dbg) Non-zero exit: kubectl --context addons-086339 logs nginx -n default: exit status 1 (72.359798ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:252: kubectl --context addons-086339 logs nginx -n default: exit status 1
addons_test.go:253: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
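Note on the root cause: the pod Events above show every pull of docker.io/nginx:alpine failing with toomanyrequests, i.e. the Docker Hub unauthenticated pull rate limit, so the pod never leaves ImagePullBackOff; the ingress addon itself is not implicated. A minimal sketch of two possible mitigations, assuming the same profile name (addons-086339) and image tag as this run; the secret name dockerhub-cred and the DOCKER_USER/DOCKER_PASS variables are illustrative placeholders, not part of the test:

# Option 1: pre-load the image into the minikube node so kubelet never pulls it from Docker Hub.
out/minikube-linux-amd64 -p addons-086339 image load docker.io/nginx:alpine

# Option 2: authenticate pulls by attaching a registry credential to the default service account.
kubectl --context addons-086339 create secret docker-registry dockerhub-cred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username="$DOCKER_USER" \
  --docker-password="$DOCKER_PASS" \
  -n default
kubectl --context addons-086339 patch serviceaccount default -n default \
  -p '{"imagePullSecrets":[{"name":"dockerhub-cred"}]}'

Either approach keeps the pull from counting against the anonymous Docker Hub quota; the test flow itself is unchanged.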
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-086339 -n addons-086339
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 logs -n 25: (1.320665297s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ delete  │ -p download-only-036288                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-036288 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ delete  │ -p download-only-319914                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-319914 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ delete  │ -p download-only-036288                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-036288 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ start   │ --download-only -p binary-mirror-623089 --alsologtostderr --binary-mirror http://127.0.0.1:33603 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-623089 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ delete  │ -p binary-mirror-623089                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-623089 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ addons  │ enable dashboard -p addons-086339                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ addons  │ disable dashboard -p addons-086339                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ start   │ -p addons-086339 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ enable headlamp -p addons-086339 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ ip      │ addons-086339 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-086339                                                                                                                                                                                                                                                                                                                                                                                         │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ addons  │ addons-086339 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │ 01 Nov 25 09:56 UTC │
	│ addons  │ addons-086339 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:59 UTC │ 01 Nov 25 09:59 UTC │
	│ addons  │ addons-086339 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:59 UTC │ 01 Nov 25 09:59 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:49:57
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:49:57.488461   74584 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:57.488721   74584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:57.488731   74584 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:57.488735   74584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:57.488932   74584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 09:49:57.489456   74584 out.go:368] Setting JSON to false
	I1101 09:49:57.490315   74584 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5545,"bootTime":1761985052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:49:57.490405   74584 start.go:143] virtualization: kvm guest
	I1101 09:49:57.492349   74584 out.go:179] * [addons-086339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:49:57.493732   74584 notify.go:221] Checking for updates...
	I1101 09:49:57.493769   74584 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 09:49:57.495124   74584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:57.496430   74584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 09:49:57.497763   74584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 09:49:57.499098   74584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:49:57.500291   74584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:49:57.501672   74584 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:49:57.530798   74584 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 09:49:57.531916   74584 start.go:309] selected driver: kvm2
	I1101 09:49:57.531929   74584 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:49:57.531940   74584 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:49:57.532704   74584 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:49:57.532950   74584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:49:57.532995   74584 cni.go:84] Creating CNI manager for ""
	I1101 09:49:57.533055   74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:49:57.533066   74584 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:49:57.533123   74584 start.go:353] cluster config:
	{Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1101 09:49:57.533236   74584 iso.go:125] acquiring lock: {Name:mk49d9a272bb99d336f82dfc5631a4c8ce9271c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:49:57.534643   74584 out.go:179] * Starting "addons-086339" primary control-plane node in "addons-086339" cluster
	I1101 09:49:57.535623   74584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:49:57.535667   74584 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:49:57.535680   74584 cache.go:59] Caching tarball of preloaded images
	I1101 09:49:57.535759   74584 preload.go:233] Found /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:49:57.535771   74584 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:49:57.536122   74584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json ...
	I1101 09:49:57.536151   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json: {Name:mka52b297897069cd677da03eb710fe0f89e4afc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:49:57.536283   74584 start.go:360] acquireMachinesLock for addons-086339: {Name:mk53a05d125fe91ead2a39c6bbf2ba926c471e2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 09:49:57.536359   74584 start.go:364] duration metric: took 60.989µs to acquireMachinesLock for "addons-086339"
	I1101 09:49:57.536383   74584 start.go:93] Provisioning new machine with config: &{Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:49:57.536443   74584 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 09:49:57.537962   74584 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1101 09:49:57.538116   74584 start.go:159] libmachine.API.Create for "addons-086339" (driver="kvm2")
	I1101 09:49:57.538147   74584 client.go:173] LocalClient.Create starting
	I1101 09:49:57.538241   74584 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem
	I1101 09:49:57.899320   74584 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem
	I1101 09:49:58.572079   74584 main.go:143] libmachine: creating domain...
	I1101 09:49:58.572106   74584 main.go:143] libmachine: creating network...
	I1101 09:49:58.573844   74584 main.go:143] libmachine: found existing default network
	I1101 09:49:58.574184   74584 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:49:58.574920   74584 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c7bfb0}
	I1101 09:49:58.575053   74584 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-086339</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:49:58.580872   74584 main.go:143] libmachine: creating private network mk-addons-086339 192.168.39.0/24...
	I1101 09:49:58.651337   74584 main.go:143] libmachine: private network mk-addons-086339 192.168.39.0/24 created
	I1101 09:49:58.651625   74584 main.go:143] libmachine: <network>
	  <name>mk-addons-086339</name>
	  <uuid>3e8e4cbf-1e3f-4b76-b08f-c763f9bae7dc</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:4f:55:bf'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:49:58.651651   74584 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 ...
	I1101 09:49:58.651674   74584 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 09:49:58.651685   74584 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 09:49:58.651769   74584 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21830-70113/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 09:49:58.889523   74584 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa...
	I1101 09:49:59.320606   74584 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk...
	I1101 09:49:59.320670   74584 main.go:143] libmachine: Writing magic tar header
	I1101 09:49:59.320695   74584 main.go:143] libmachine: Writing SSH key tar header
	I1101 09:49:59.320769   74584 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 ...
	I1101 09:49:59.320832   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339
	I1101 09:49:59.320855   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 (perms=drwx------)
	I1101 09:49:59.320865   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines
	I1101 09:49:59.320880   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines (perms=drwxr-xr-x)
	I1101 09:49:59.320892   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 09:49:59.320902   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube (perms=drwxr-xr-x)
	I1101 09:49:59.320910   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113
	I1101 09:49:59.320919   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113 (perms=drwxrwxr-x)
	I1101 09:49:59.320926   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 09:49:59.320936   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 09:49:59.320946   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 09:49:59.320953   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 09:49:59.320964   74584 main.go:143] libmachine: checking permissions on dir: /home
	I1101 09:49:59.320971   74584 main.go:143] libmachine: skipping /home - not owner
	I1101 09:49:59.320977   74584 main.go:143] libmachine: defining domain...
	I1101 09:49:59.322386   74584 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-086339</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-086339'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:49:59.327390   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:41:14:53 in network default
	I1101 09:49:59.328042   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:49:59.328057   74584 main.go:143] libmachine: starting domain...
	I1101 09:49:59.328062   74584 main.go:143] libmachine: ensuring networks are active...
	I1101 09:49:59.328857   74584 main.go:143] libmachine: Ensuring network default is active
	I1101 09:49:59.329422   74584 main.go:143] libmachine: Ensuring network mk-addons-086339 is active
	I1101 09:49:59.330127   74584 main.go:143] libmachine: getting domain XML...
	I1101 09:49:59.331370   74584 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-086339</name>
	  <uuid>a0be334a-213a-4e9a-bad3-6168cb6c4d93</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b9:a4:85'/>
	      <source network='mk-addons-086339'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:41:14:53'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:50:00.609088   74584 main.go:143] libmachine: waiting for domain to start...
	I1101 09:50:00.610434   74584 main.go:143] libmachine: domain is now running
	I1101 09:50:00.610456   74584 main.go:143] libmachine: waiting for IP...
	I1101 09:50:00.611312   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:00.612106   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:00.612125   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:00.612466   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:00.612543   74584 retry.go:31] will retry after 238.184391ms: waiting for domain to come up
	I1101 09:50:00.851957   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:00.852980   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:00.852999   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:00.853378   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:00.853417   74584 retry.go:31] will retry after 315.459021ms: waiting for domain to come up
	I1101 09:50:01.170821   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:01.171618   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:01.171637   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:01.172000   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:01.172045   74584 retry.go:31] will retry after 375.800667ms: waiting for domain to come up
	I1101 09:50:01.549768   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:01.550551   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:01.550568   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:01.550912   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:01.550947   74584 retry.go:31] will retry after 436.650242ms: waiting for domain to come up
	I1101 09:50:01.989558   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:01.990329   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:01.990346   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:01.990674   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:01.990717   74584 retry.go:31] will retry after 579.834412ms: waiting for domain to come up
	I1101 09:50:02.572692   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:02.573467   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:02.573488   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:02.573815   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:02.573865   74584 retry.go:31] will retry after 839.063755ms: waiting for domain to come up
	I1101 09:50:03.414428   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:03.415319   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:03.415342   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:03.415659   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:03.415702   74584 retry.go:31] will retry after 768.970672ms: waiting for domain to come up
	I1101 09:50:04.186700   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:04.187419   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:04.187437   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:04.187709   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:04.187746   74584 retry.go:31] will retry after 1.192575866s: waiting for domain to come up
	I1101 09:50:05.382202   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:05.382884   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:05.382907   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:05.383270   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:05.383321   74584 retry.go:31] will retry after 1.520355221s: waiting for domain to come up
	I1101 09:50:06.906019   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:06.906685   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:06.906702   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:06.906966   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:06.907000   74584 retry.go:31] will retry after 1.452783326s: waiting for domain to come up
	I1101 09:50:08.361823   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:08.362686   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:08.362711   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:08.363062   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:08.363109   74584 retry.go:31] will retry after 1.991395227s: waiting for domain to come up
	I1101 09:50:10.357523   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:10.358353   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:10.358372   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:10.358693   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:10.358739   74584 retry.go:31] will retry after 3.532288823s: waiting for domain to come up
	I1101 09:50:13.893052   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:13.893671   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:13.893684   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:13.893975   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:13.894012   74584 retry.go:31] will retry after 4.252229089s: waiting for domain to come up
	I1101 09:50:18.147616   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.148327   74584 main.go:143] libmachine: domain addons-086339 has current primary IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.148350   74584 main.go:143] libmachine: found domain IP: 192.168.39.58
	I1101 09:50:18.148365   74584 main.go:143] libmachine: reserving static IP address...
	I1101 09:50:18.148791   74584 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-086339", mac: "52:54:00:b9:a4:85", ip: "192.168.39.58"} in network mk-addons-086339
	I1101 09:50:18.327560   74584 main.go:143] libmachine: reserved static IP address 192.168.39.58 for domain addons-086339
	I1101 09:50:18.327599   74584 main.go:143] libmachine: waiting for SSH...
	I1101 09:50:18.327609   74584 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 09:50:18.330699   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.331371   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.331408   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.331641   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.331928   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.331942   74584 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 09:50:18.444329   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:50:18.444817   74584 main.go:143] libmachine: domain creation complete
	I1101 09:50:18.446547   74584 machine.go:94] provisionDockerMachine start ...
	I1101 09:50:18.449158   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.449586   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.449617   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.449805   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.450004   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.450014   74584 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:50:18.560574   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 09:50:18.560609   74584 buildroot.go:166] provisioning hostname "addons-086339"
	I1101 09:50:18.564015   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.564582   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.564616   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.564819   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.565060   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.565073   74584 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-086339 && echo "addons-086339" | sudo tee /etc/hostname
	I1101 09:50:18.692294   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-086339
	
	I1101 09:50:18.695361   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.695730   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.695754   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.695958   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.696217   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.696238   74584 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-086339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-086339/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-086339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:50:18.817833   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 
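	The heredoc just executed is minikube's idempotent hostname fix-up: it only touches /etc/hosts when no entry for the new hostname exists, and it prefers rewriting an existing 127.0.1.1 line over appending a new one. A simplified standalone sketch of the same pattern (the hostname value is just this profile's name):

    HOSTNAME=addons-086339                      # example value from this run
    if ! grep -q "[[:space:]]${HOSTNAME}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${HOSTNAME}/" /etc/hosts
      else
        echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
      fi
    fi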
	I1101 09:50:18.817861   74584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 09:50:18.817917   74584 buildroot.go:174] setting up certificates
	I1101 09:50:18.817929   74584 provision.go:84] configureAuth start
	I1101 09:50:18.820836   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.821182   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.821205   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.823468   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.823880   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.823917   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.824065   74584 provision.go:143] copyHostCerts
	I1101 09:50:18.824126   74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 09:50:18.824236   74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 09:50:18.824293   74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 09:50:18.824393   74584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.addons-086339 san=[127.0.0.1 192.168.39.58 addons-086339 localhost minikube]
	I1101 09:50:18.982158   74584 provision.go:177] copyRemoteCerts
	I1101 09:50:18.982222   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:50:18.984649   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.985018   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.985044   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.985191   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.074666   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:50:19.105450   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:50:19.136079   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:50:19.165744   74584 provision.go:87] duration metric: took 347.798818ms to configureAuth
	I1101 09:50:19.165785   74584 buildroot.go:189] setting minikube options for container-runtime
	I1101 09:50:19.165985   74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:19.168523   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.169168   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.169200   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.169383   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:19.169583   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:19.169597   74584 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:50:19.428804   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:50:19.428828   74584 machine.go:97] duration metric: took 982.268013ms to provisionDockerMachine
	I1101 09:50:19.428839   74584 client.go:176] duration metric: took 21.890685225s to LocalClient.Create
	I1101 09:50:19.428858   74584 start.go:167] duration metric: took 21.89074228s to libmachine.API.Create "addons-086339"
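	The /etc/sysconfig/crio.minikube step a few lines up drops a systemd environment file so that cri-o picks up minikube's --insecure-registry flag for the service CIDR, then restarts the runtime. A quick way to confirm the result inside the guest (illustrative spot check, not part of the test):

    cat /etc/sysconfig/crio.minikube    # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl is-active crio            # the restart should have left cri-o running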
	I1101 09:50:19.428865   74584 start.go:293] postStartSetup for "addons-086339" (driver="kvm2")
	I1101 09:50:19.428874   74584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:50:19.428936   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:50:19.431801   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.432251   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.432273   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.432405   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.520001   74584 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:50:19.525231   74584 info.go:137] Remote host: Buildroot 2025.02
	I1101 09:50:19.525259   74584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 09:50:19.525321   74584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 09:50:19.525345   74584 start.go:296] duration metric: took 96.474195ms for postStartSetup
	I1101 09:50:19.528299   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.528696   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.528717   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.528916   74584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json ...
	I1101 09:50:19.529095   74584 start.go:128] duration metric: took 21.992639315s to createHost
	I1101 09:50:19.531331   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.531699   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.531722   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.531876   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:19.532065   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:19.532075   74584 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 09:50:19.643235   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761990619.607534656
	
	I1101 09:50:19.643257   74584 fix.go:216] guest clock: 1761990619.607534656
	I1101 09:50:19.643268   74584 fix.go:229] Guest: 2025-11-01 09:50:19.607534656 +0000 UTC Remote: 2025-11-01 09:50:19.52910603 +0000 UTC m=+22.094671738 (delta=78.428626ms)
	I1101 09:50:19.643283   74584 fix.go:200] guest clock delta is within tolerance: 78.428626ms
	I1101 09:50:19.643288   74584 start.go:83] releasing machines lock for "addons-086339", held for 22.106918768s
	I1101 09:50:19.646471   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.646896   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.646926   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.647587   74584 ssh_runner.go:195] Run: cat /version.json
	I1101 09:50:19.647618   74584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:50:19.650456   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.650903   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.650929   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.650937   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.651111   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.651498   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.651548   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.651722   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.732914   74584 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:19.761438   74584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:50:19.921978   74584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:50:19.929230   74584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:50:19.929321   74584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:50:19.949743   74584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
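	Here minikube sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so that only the config it writes later is loaded. To see (or manually undo) what was sidelined, a sketch along these lines works:

    ls /etc/cni/net.d/*.mk_disabled 2>/dev/null
    # restoring one by hand (not something the test does):
    # sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist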
	I1101 09:50:19.949779   74584 start.go:496] detecting cgroup driver to use...
	I1101 09:50:19.949851   74584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:50:19.969767   74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:50:19.988383   74584 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:50:19.988445   74584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:50:20.006528   74584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:50:20.025137   74584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:50:20.177314   74584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:50:20.388642   74584 docker.go:234] disabling docker service ...
	I1101 09:50:20.388724   74584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:50:20.405986   74584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:50:20.421236   74584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:50:20.585305   74584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:50:20.731424   74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
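	Because this run uses the crio container runtime, both the docker and cri-docker units are stopped, disabled and masked so they cannot claim the kubelet's CRI socket. An equivalent spot check after these commands (illustrative):

    systemctl is-enabled docker.service cri-docker.service    # expect masked
    systemctl is-active docker.service                        # expect inactive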
	I1101 09:50:20.748134   74584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:50:20.778555   74584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:50:20.778621   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.792483   74584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:50:20.792563   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.806228   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.819314   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.832971   74584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:50:20.847580   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.861416   74584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.884021   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
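	The sed edits above rewrite the /etc/crio/crio.conf.d/02-crio.conf drop-in: they pin the pause image to registry.k8s.io/pause:3.10.1, switch the cgroup manager to cgroupfs with conmon in the pod cgroup, and open unprivileged ports from 0 via default_sysctls. One way to verify the resulting drop-in (illustrative):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf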
	I1101 09:50:20.898082   74584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:50:20.909995   74584 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 09:50:20.910054   74584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 09:50:20.932503   74584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
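	Loading br_netfilter and enabling IPv4 forwarding are the usual prerequisites for bridged pod traffic to be seen by iptables and routed off the node; the sysctl probe above fails only because the module was not loaded yet. A quick re-check after these steps (illustrative):

    lsmod | grep br_netfilter                    # module is now loaded
    sysctl net.ipv4.ip_forward                   # should report 1 after the echo above
    sysctl net.bridge.bridge-nf-call-iptables    # resolvable now that br_netfilter is loaded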
	I1101 09:50:20.945456   74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:50:21.091518   74584 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:50:21.209311   74584 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:50:21.209394   74584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:50:21.215638   74584 start.go:564] Will wait 60s for crictl version
	I1101 09:50:21.215718   74584 ssh_runner.go:195] Run: which crictl
	I1101 09:50:21.220104   74584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 09:50:21.265319   74584 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 09:50:21.265428   74584 ssh_runner.go:195] Run: crio --version
	I1101 09:50:21.296407   74584 ssh_runner.go:195] Run: crio --version
	I1101 09:50:21.330270   74584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 09:50:21.333966   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:21.334360   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:21.334382   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:21.334577   74584 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 09:50:21.339385   74584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:50:21.355743   74584 kubeadm.go:884] updating cluster {Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:50:21.355864   74584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:50:21.355925   74584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:50:21.393026   74584 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 09:50:21.393097   74584 ssh_runner.go:195] Run: which lz4
	I1101 09:50:21.397900   74584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 09:50:21.403032   74584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 09:50:21.403064   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 09:50:22.958959   74584 crio.go:462] duration metric: took 1.561103562s to copy over tarball
	I1101 09:50:22.959030   74584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 09:50:24.646069   74584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.687012473s)
	I1101 09:50:24.646110   74584 crio.go:469] duration metric: took 1.687120275s to extract the tarball
	I1101 09:50:24.646124   74584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 09:50:24.689384   74584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:50:24.745551   74584 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:50:24.745581   74584 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:50:24.745590   74584 kubeadm.go:935] updating node { 192.168.39.58 8443 v1.34.1 crio true true} ...
	I1101 09:50:24.745676   74584 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-086339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
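	This is the kubelet drop-in minikube generates (written a few lines below as the 312-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf): it clears ExecStart and relaunches the versioned kubelet binary with the bootstrap kubeconfig, node IP and hostname override. Once that scp has happened, the effective unit can be inspected with (illustrative):

    systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
    systemctl show -p ExecStart kubelet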
	I1101 09:50:24.745742   74584 ssh_runner.go:195] Run: crio config
	I1101 09:50:24.792600   74584 cni.go:84] Creating CNI manager for ""
	I1101 09:50:24.792624   74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:50:24.792643   74584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:50:24.792678   74584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-086339 NodeName:addons-086339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:50:24.792797   74584 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-086339"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
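	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what minikube writes to /var/tmp/minikube/kubeadm.yaml and later hands to kubeadm init. To exercise such a config without touching the node, a dry run is possible (illustrative; the real invocation further below also passes --ignore-preflight-errors for the listed checks):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run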
	
	I1101 09:50:24.792863   74584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:50:24.805312   74584 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:50:24.805386   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:50:24.817318   74584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1101 09:50:24.839738   74584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:50:24.861206   74584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1101 09:50:24.882598   74584 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1101 09:50:24.887202   74584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:50:24.903393   74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:50:25.046563   74584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:50:25.078339   74584 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339 for IP: 192.168.39.58
	I1101 09:50:25.078373   74584 certs.go:195] generating shared ca certs ...
	I1101 09:50:25.078393   74584 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.078607   74584 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 09:50:25.370750   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt ...
	I1101 09:50:25.370787   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt: {Name:mk44e2ef3879300ef465f5e14a88e17a335203c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.370979   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key ...
	I1101 09:50:25.370991   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key: {Name:mk6a6a936cb10734e248a5e184dc212d0dd50fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.371084   74584 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 09:50:25.596029   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt ...
	I1101 09:50:25.596060   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt: {Name:mk4883ce1337edc02ddc3ac7b72fc885fc718a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.596251   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key ...
	I1101 09:50:25.596263   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key: {Name:mk64aaf400461d117ff2d2f246459980ad32acba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.596345   74584 certs.go:257] generating profile certs ...
	I1101 09:50:25.596402   74584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key
	I1101 09:50:25.596427   74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt with IP's: []
	I1101 09:50:25.837595   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt ...
	I1101 09:50:25.837629   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: {Name:mk6a3c2908e98c5011b9a353eff3f73fbb200e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.837800   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key ...
	I1101 09:50:25.837814   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key: {Name:mke495d2d15563b5194e6cade83d0c75b9212db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.837890   74584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c
	I1101 09:50:25.837920   74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.58]
	I1101 09:50:25.933112   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c ...
	I1101 09:50:25.933142   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c: {Name:mk0254e8775842aca5cd671155531f1ec86ec40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.933311   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c ...
	I1101 09:50:25.933328   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c: {Name:mk3e1746ccfcc3989b4b0944f75fafe8929108a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.933413   74584 certs.go:382] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt
	I1101 09:50:25.933491   74584 certs.go:386] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key
	I1101 09:50:25.933552   74584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key
	I1101 09:50:25.933569   74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt with IP's: []
	I1101 09:50:26.270478   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt ...
	I1101 09:50:26.270513   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt: {Name:mk40ee0c5f510c6df044b64c5c0ccf02f754f518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:26.270707   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key ...
	I1101 09:50:26.270719   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key: {Name:mk13d4f8cab34676a9c94f4e51f06fa6b4450e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:26.270893   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:50:26.270934   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:50:26.270958   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:50:26.270980   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 09:50:26.271524   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:50:26.304432   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:50:26.336585   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:50:26.370965   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:50:26.404637   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:50:26.438434   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:50:26.470419   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:50:26.505400   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:50:26.538739   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:50:26.571139   74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:50:26.596933   74584 ssh_runner.go:195] Run: openssl version
	I1101 09:50:26.604814   74584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:50:26.625168   74584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:50:26.631403   74584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:50:26.631463   74584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:50:26.639666   74584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
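	The b5213941.0 symlink is the OpenSSL subject-hash form of the minikube CA, which is how the guest's system trust store finds it; the hash in the link name comes straight from the openssl x509 -hash call two lines up:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints the b5213941 used in the link name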
	I1101 09:50:26.655106   74584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:50:26.660616   74584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:50:26.660681   74584 kubeadm.go:401] StartCluster: {Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:50:26.660767   74584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:26.660830   74584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:26.713279   74584 cri.go:89] found id: ""
	I1101 09:50:26.713354   74584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:50:26.732360   74584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:50:26.753939   74584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:50:26.768399   74584 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:50:26.768428   74584 kubeadm.go:158] found existing configuration files:
	
	I1101 09:50:26.768509   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:50:26.780652   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:50:26.780726   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:50:26.792996   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:50:26.805190   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:50:26.805252   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:50:26.817970   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:50:26.829425   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:50:26.829521   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:50:26.842392   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:50:26.855031   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:50:26.855120   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:50:26.868465   74584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 09:50:27.034423   74584 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:50:40.596085   74584 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:50:40.596157   74584 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:50:40.596234   74584 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:50:40.596323   74584 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:50:40.596395   74584 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:50:40.596501   74584 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:50:40.598485   74584 out.go:252]   - Generating certificates and keys ...
	I1101 09:50:40.598596   74584 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:50:40.598677   74584 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:50:40.598786   74584 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:50:40.598884   74584 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:50:40.598965   74584 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:50:40.599020   74584 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:50:40.599097   74584 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:50:40.599235   74584 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-086339 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I1101 09:50:40.599294   74584 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:50:40.599486   74584 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-086339 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I1101 09:50:40.599578   74584 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:50:40.599671   74584 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:50:40.599744   74584 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:50:40.599837   74584 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:50:40.599908   74584 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:50:40.599990   74584 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:50:40.600070   74584 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:50:40.600159   74584 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:50:40.600236   74584 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:50:40.600342   74584 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:50:40.600430   74584 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:50:40.601841   74584 out.go:252]   - Booting up control plane ...
	I1101 09:50:40.601953   74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:50:40.602064   74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:50:40.602160   74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:50:40.602298   74584 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:50:40.602458   74584 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:50:40.602614   74584 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:50:40.602706   74584 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:50:40.602764   74584 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:50:40.602925   74584 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:50:40.603084   74584 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:50:40.603174   74584 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002004831s
	I1101 09:50:40.603300   74584 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:50:40.603404   74584 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.58:8443/livez
	I1101 09:50:40.603516   74584 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:50:40.603630   74584 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:50:40.603719   74584 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.147708519s
	I1101 09:50:40.603845   74584 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.505964182s
	I1101 09:50:40.603957   74584 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503174092s
	I1101 09:50:40.604099   74584 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:50:40.604336   74584 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:50:40.604410   74584 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:50:40.604590   74584 kubeadm.go:319] [mark-control-plane] Marking the node addons-086339 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:50:40.604649   74584 kubeadm.go:319] [bootstrap-token] Using token: n6ooj1.g2r52lt9s64k7lzx
	I1101 09:50:40.606300   74584 out.go:252]   - Configuring RBAC rules ...
	I1101 09:50:40.606413   74584 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:50:40.606488   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:50:40.606682   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:50:40.606839   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:50:40.607006   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:50:40.607114   74584 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:50:40.607229   74584 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:50:40.607269   74584 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:50:40.607307   74584 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:50:40.607312   74584 kubeadm.go:319] 
	I1101 09:50:40.607359   74584 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:50:40.607364   74584 kubeadm.go:319] 
	I1101 09:50:40.607423   74584 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:50:40.607428   74584 kubeadm.go:319] 
	I1101 09:50:40.607448   74584 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:50:40.607512   74584 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:50:40.607591   74584 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:50:40.607600   74584 kubeadm.go:319] 
	I1101 09:50:40.607669   74584 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:50:40.607677   74584 kubeadm.go:319] 
	I1101 09:50:40.607717   74584 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:50:40.607722   74584 kubeadm.go:319] 
	I1101 09:50:40.607785   74584 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:50:40.607880   74584 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:50:40.607975   74584 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:50:40.607984   74584 kubeadm.go:319] 
	I1101 09:50:40.608100   74584 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:50:40.608199   74584 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:50:40.608211   74584 kubeadm.go:319] 
	I1101 09:50:40.608275   74584 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token n6ooj1.g2r52lt9s64k7lzx \
	I1101 09:50:40.608412   74584 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a \
	I1101 09:50:40.608438   74584 kubeadm.go:319] 	--control-plane 
	I1101 09:50:40.608444   74584 kubeadm.go:319] 
	I1101 09:50:40.608584   74584 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:50:40.608595   74584 kubeadm.go:319] 
	I1101 09:50:40.608701   74584 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n6ooj1.g2r52lt9s64k7lzx \
	I1101 09:50:40.608845   74584 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a 
	I1101 09:50:40.608868   74584 cni.go:84] Creating CNI manager for ""
	I1101 09:50:40.608880   74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:50:40.610610   74584 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 09:50:40.612071   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 09:50:40.627372   74584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 09:50:40.653117   74584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:50:40.653226   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-086339 minikube.k8s.io/updated_at=2025_11_01T09_50_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=addons-086339 minikube.k8s.io/primary=true
	I1101 09:50:40.653234   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:40.841062   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:40.841065   74584 ops.go:34] apiserver oom_adj: -16
	I1101 09:50:41.341444   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:41.841738   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:42.341137   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:42.841859   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:43.341430   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:43.842032   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:44.341776   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:44.842146   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:45.342151   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:45.471694   74584 kubeadm.go:1114] duration metric: took 4.818566134s to wait for elevateKubeSystemPrivileges
	I1101 09:50:45.471741   74584 kubeadm.go:403] duration metric: took 18.811065248s to StartCluster
	I1101 09:50:45.471765   74584 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:45.471940   74584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 09:50:45.472382   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:45.472671   74584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:50:45.472717   74584 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:50:45.472765   74584 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 09:50:45.472916   74584 addons.go:70] Setting yakd=true in profile "addons-086339"
	I1101 09:50:45.472917   74584 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-086339"
	I1101 09:50:45.472959   74584 addons.go:239] Setting addon yakd=true in "addons-086339"
	I1101 09:50:45.472963   74584 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-086339"
	I1101 09:50:45.472976   74584 addons.go:70] Setting registry=true in profile "addons-086339"
	I1101 09:50:45.472991   74584 addons.go:239] Setting addon registry=true in "addons-086339"
	I1101 09:50:45.473004   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473010   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473012   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473003   74584 addons.go:70] Setting metrics-server=true in profile "addons-086339"
	I1101 09:50:45.473051   74584 addons.go:70] Setting registry-creds=true in profile "addons-086339"
	I1101 09:50:45.473068   74584 addons.go:239] Setting addon metrics-server=true in "addons-086339"
	I1101 09:50:45.473084   74584 addons.go:239] Setting addon registry-creds=true in "addons-086339"
	I1101 09:50:45.473121   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473144   74584 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-086339"
	I1101 09:50:45.473150   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473175   74584 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-086339"
	I1101 09:50:45.473203   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473564   74584 addons.go:70] Setting volcano=true in profile "addons-086339"
	I1101 09:50:45.473589   74584 addons.go:239] Setting addon volcano=true in "addons-086339"
	I1101 09:50:45.473622   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473737   74584 addons.go:70] Setting gcp-auth=true in profile "addons-086339"
	I1101 09:50:45.473786   74584 mustload.go:66] Loading cluster: addons-086339
	I1101 09:50:45.474010   74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:45.474219   74584 addons.go:70] Setting ingress-dns=true in profile "addons-086339"
	I1101 09:50:45.474254   74584 addons.go:239] Setting addon ingress-dns=true in "addons-086339"
	I1101 09:50:45.474313   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.472963   74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:45.473011   74584 addons.go:70] Setting storage-provisioner=true in profile "addons-086339"
	I1101 09:50:45.474667   74584 addons.go:239] Setting addon storage-provisioner=true in "addons-086339"
	I1101 09:50:45.474685   74584 addons.go:70] Setting cloud-spanner=true in profile "addons-086339"
	I1101 09:50:45.474699   74584 addons.go:239] Setting addon cloud-spanner=true in "addons-086339"
	I1101 09:50:45.474703   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474721   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474993   74584 addons.go:70] Setting volumesnapshots=true in profile "addons-086339"
	I1101 09:50:45.475011   74584 addons.go:239] Setting addon volumesnapshots=true in "addons-086339"
	I1101 09:50:45.475031   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.475344   74584 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-086339"
	I1101 09:50:45.475368   74584 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-086339"
	I1101 09:50:45.475372   74584 addons.go:70] Setting default-storageclass=true in profile "addons-086339"
	I1101 09:50:45.475392   74584 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-086339"
	I1101 09:50:45.475482   74584 addons.go:70] Setting ingress=true in profile "addons-086339"
	I1101 09:50:45.475497   74584 addons.go:239] Setting addon ingress=true in "addons-086339"
	I1101 09:50:45.475549   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474669   74584 addons.go:70] Setting inspektor-gadget=true in profile "addons-086339"
	I1101 09:50:45.475789   74584 addons.go:239] Setting addon inspektor-gadget=true in "addons-086339"
	I1101 09:50:45.475796   74584 out.go:179] * Verifying Kubernetes components...
	I1101 09:50:45.475819   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474680   74584 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-086339"
	I1101 09:50:45.476065   74584 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-086339"
	I1101 09:50:45.476115   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.477255   74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:50:45.480031   74584 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:50:45.480031   74584 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:50:45.480033   74584 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	W1101 09:50:45.481113   74584 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 09:50:45.481446   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.484726   74584 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:50:45.484753   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:50:45.484938   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:50:45.484960   74584 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:50:45.484966   74584 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:50:45.484973   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:50:45.485125   74584 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:50:45.485153   74584 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:50:45.485273   74584 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-086339"
	I1101 09:50:45.485691   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.485920   74584 addons.go:239] Setting addon default-storageclass=true in "addons-086339"
	I1101 09:50:45.485962   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.487450   74584 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:50:45.487459   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:50:45.487484   74584 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:50:45.487497   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:50:45.487517   74584 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:50:45.487560   74584 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:50:45.487563   74584 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 09:50:45.488316   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:50:45.488329   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:50:45.488348   74584 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:50:45.489625   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:50:45.489651   74584 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:50:45.489699   74584 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:50:45.489902   74584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:50:45.490208   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:50:45.490224   74584 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:50:45.490262   74584 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:50:45.490750   74584 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:50:45.491163   74584 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:50:45.491557   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:50:45.491173   74584 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 09:50:45.491207   74584 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:50:45.491713   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:50:45.491208   74584 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:50:45.491791   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:50:45.491917   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:50:45.492081   74584 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:50:45.492774   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:50:45.493050   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.493676   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.494048   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.494216   74584 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:50:45.494271   74584 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:50:45.494283   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:50:45.494189   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.494412   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.495222   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.495346   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.495450   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.495550   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:50:45.495608   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:50:45.495670   74584 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:50:45.495688   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:50:45.495797   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.495840   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.496406   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.496819   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.497603   74584 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:50:45.497622   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:50:45.498607   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:50:45.500140   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:50:45.500156   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.500745   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.500905   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.501448   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.501490   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.501945   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502137   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502129   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.502357   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.502386   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502479   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502618   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.502659   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:50:45.502626   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.502671   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502621   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503336   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.503381   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503456   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.503481   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503494   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503740   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.503831   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.503858   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503858   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.503886   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.504294   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.504670   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.504706   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.504708   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.504783   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.504812   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.504989   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.505241   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.505275   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:50:45.505416   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.505439   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.505646   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.505919   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.506301   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.506330   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.506479   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.506657   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.507207   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.507243   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.507456   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.507843   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:50:45.509235   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:50:45.509251   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:50:45.511923   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.512313   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.512339   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.512478   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	W1101 09:50:45.863592   74584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56024->192.168.39.58:22: read: connection reset by peer
	I1101 09:50:45.863626   74584 retry.go:31] will retry after 353.468022ms: ssh: handshake failed: read tcp 192.168.39.1:56024->192.168.39.58:22: read: connection reset by peer
	W1101 09:50:45.863706   74584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56030->192.168.39.58:22: read: connection reset by peer
	I1101 09:50:45.863718   74584 retry.go:31] will retry after 366.435822ms: ssh: handshake failed: read tcp 192.168.39.1:56030->192.168.39.58:22: read: connection reset by peer
	I1101 09:50:46.204700   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:50:46.344397   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:50:46.364416   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:50:46.364443   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:50:46.382914   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:50:46.401116   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:50:46.401152   74584 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:50:46.499674   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:50:46.525387   74584 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:50:46.525422   74584 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:50:46.528653   74584 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:46.528683   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:50:46.537039   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:50:46.585103   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:50:46.700077   74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:50:46.700117   74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:50:46.802990   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:50:46.845193   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:50:46.845228   74584 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:50:46.948887   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:47.114091   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:50:47.114126   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:50:47.173908   74584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.701178901s)
	I1101 09:50:47.173921   74584 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.696642998s)
	I1101 09:50:47.173999   74584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:50:47.174095   74584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:50:47.203736   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:50:47.203782   74584 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:50:47.327504   74584 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:50:47.327541   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:50:47.447307   74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:50:47.447333   74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:50:47.479289   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:50:47.516143   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:50:47.537776   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:50:47.537808   74584 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:50:47.602456   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:50:47.602492   74584 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:50:47.634301   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:50:47.634334   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:50:47.666382   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:50:47.896414   74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:50:47.896454   74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:50:48.070881   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:50:48.070918   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:50:48.088172   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:50:48.112581   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:50:48.112615   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:50:48.384804   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.180058223s)
	I1101 09:50:48.433222   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:50:48.433251   74584 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:50:48.570103   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:50:48.712201   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:50:48.712239   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:50:48.761409   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.41696863s)
	I1101 09:50:49.019503   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.636542693s)
	I1101 09:50:49.055833   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:50:49.055864   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:50:49.130302   74584 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:50:49.130330   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:50:49.321757   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:50:49.321783   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:50:49.571119   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:50:49.804708   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:50:49.804738   74584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:50:49.962509   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:50:49.962544   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:50:50.281087   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:50:50.281117   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:50:50.772055   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:50:50.772080   74584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:50:51.239409   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:50:52.962797   74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:50:52.966311   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:52.966764   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:52.966789   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:52.966934   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:53.227038   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.727328057s)
	I1101 09:50:53.227151   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.69006708s)
	I1101 09:50:53.227189   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.642046598s)
	I1101 09:50:53.227242   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.424224705s)
	I1101 09:50:53.376728   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.427801852s)
	W1101 09:50:53.376771   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:50:53.376826   74584 retry.go:31] will retry after 359.696332ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:50:53.376871   74584 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.202843079s)
	I1101 09:50:53.376921   74584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.202805311s)
	I1101 09:50:53.376950   74584 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 09:50:53.377909   74584 node_ready.go:35] waiting up to 6m0s for node "addons-086339" to be "Ready" ...
	I1101 09:50:53.462748   74584 node_ready.go:49] node "addons-086339" is "Ready"
	I1101 09:50:53.462778   74584 node_ready.go:38] duration metric: took 84.807458ms for node "addons-086339" to be "Ready" ...
	I1101 09:50:53.462793   74584 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:50:53.462847   74584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:50:53.534003   74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:50:53.650576   74584 addons.go:239] Setting addon gcp-auth=true in "addons-086339"
	I1101 09:50:53.650630   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:53.652687   74584 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:50:53.655511   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:53.655896   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:53.655920   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:53.656060   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:53.737577   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:53.969325   74584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-086339" context rescaled to 1 replicas
	I1101 09:50:55.148780   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.669443662s)
	I1101 09:50:55.148826   74584 addons.go:480] Verifying addon ingress=true in "addons-086339"
	I1101 09:50:55.148852   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.632675065s)
	I1101 09:50:55.148956   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.482535978s)
	I1101 09:50:55.149057   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.060852546s)
	I1101 09:50:55.149064   74584 addons.go:480] Verifying addon registry=true in "addons-086339"
	I1101 09:50:55.149094   74584 addons.go:480] Verifying addon metrics-server=true in "addons-086339"
	I1101 09:50:55.149162   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.579011593s)
	I1101 09:50:55.150934   74584 out.go:179] * Verifying ingress addon...
	I1101 09:50:55.150992   74584 out.go:179] * Verifying registry addon...
	I1101 09:50:55.151019   74584 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-086339 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:50:55.152636   74584 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:50:55.152833   74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:50:55.236576   74584 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:50:55.236603   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:55.236704   74584 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:50:55.236726   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:55.608860   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.037686923s)
	W1101 09:50:55.608910   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:50:55.608932   74584 retry.go:31] will retry after 233.800882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
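This first failure is the usual ordering race: the apply installs the snapshot CRDs and a VolumeSnapshotClass in the same invocation, and the class cannot be mapped until the volumesnapshotclasses CRD is registered and established, hence "ensure CRDs are installed first". The forced re-apply a few seconds later completes with no further retry logged for it. A sketch of waiting for a CRD to report Established before applying dependent objects, using the apiextensions clientset; the CRD name and kubeconfig path come from the log, the poll interval and timeout are assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForCRDEstablished polls the named CRD until it reports the
	// Established condition, after which custom resources of that kind
	// can be applied without the "resource mapping not found" error.
	func waitForCRDEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
		for {
			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond): // assumed poll interval
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := apiextclient.NewForConfigOrDie(cfg)

		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitForCRDEstablished(ctx, client, "volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
			panic(err)
		}
		fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
	}
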
	I1101 09:50:55.697978   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:55.698030   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:55.843247   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:50:56.241749   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:56.241968   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:56.550655   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.311175816s)
	I1101 09:50:56.550716   74584 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-086339"
	I1101 09:50:56.550663   74584 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.087794232s)
	I1101 09:50:56.550810   74584 api_server.go:72] duration metric: took 11.078058308s to wait for apiserver process to appear ...
	I1101 09:50:56.550891   74584 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:50:56.550935   74584 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1101 09:50:56.552309   74584 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:50:56.554454   74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:50:56.566874   74584 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1101 09:50:56.569220   74584 api_server.go:141] control plane version: v1.34.1
	I1101 09:50:56.569247   74584 api_server.go:131] duration metric: took 18.347182ms to wait for apiserver health ...
	I1101 09:50:56.569258   74584 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:50:56.586752   74584 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:50:56.586776   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:56.587214   74584 system_pods.go:59] 20 kube-system pods found
	I1101 09:50:56.587266   74584 system_pods.go:61] "amd-gpu-device-plugin-lr4lw" [bee1e3ae-5d43-4b43-a348-0e04ec066093] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:50:56.587277   74584 system_pods.go:61] "coredns-66bc5c9577-5v6h7" [ff58ca9c-6949-4ab8-b8ff-8be8e7b75757] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.587289   74584 system_pods.go:61] "coredns-66bc5c9577-vsbrs" [c3a65dae-82f4-4f33-b460-fa45a39b3342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.587297   74584 system_pods.go:61] "csi-hostpath-attacher-0" [50e03a30-f2e9-4ec1-ba85-6da2654030c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:50:56.587304   74584 system_pods.go:61] "csi-hostpath-resizer-0" [d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2] Pending
	I1101 09:50:56.587318   74584 system_pods.go:61] "csi-hostpathplugin-z7vjp" [96e87cd6-068d-40af-9966-b875b9a7629e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:50:56.587325   74584 system_pods.go:61] "etcd-addons-086339" [f17e5eab-51c0-409a-9bb3-3cb5e71200fd] Running
	I1101 09:50:56.587336   74584 system_pods.go:61] "kube-apiserver-addons-086339" [51b3d29f-af5e-441a-b3c0-754241fc92bc] Running
	I1101 09:50:56.587343   74584 system_pods.go:61] "kube-controller-manager-addons-086339" [62d54b81-f6bc-4bdc-bd22-c8a6fc39a043] Running
	I1101 09:50:56.587352   74584 system_pods.go:61] "kube-ingress-dns-minikube" [e328fd3e-a381-414d-ba99-1aa6f7f40585] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:50:56.587357   74584 system_pods.go:61] "kube-proxy-7fck9" [a834adcc-b0ec-4cad-8944-bea90a627787] Running
	I1101 09:50:56.587365   74584 system_pods.go:61] "kube-scheduler-addons-086339" [4db76834-5184-4a83-a228-35e83abc8c9d] Running
	I1101 09:50:56.587372   74584 system_pods.go:61] "metrics-server-85b7d694d7-6lx9r" [c4e44e90-7d77-43fc-913f-f26877e52760] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:50:56.587378   74584 system_pods.go:61] "nvidia-device-plugin-daemonset-jh2xq" [0a9234e2-8d6a-4110-86be-ff05f9be1a29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:50:56.587387   74584 system_pods.go:61] "registry-6b586f9694-8zvc5" [23d65f21-71d0-4da4-8f2f-5b59f93f9085] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:50:56.587395   74584 system_pods.go:61] "registry-creds-764b6fb674-ztjtq" [ae641ce9-b248-46a3-8e01-9d25e8d29825] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:50:56.587408   74584 system_pods.go:61] "registry-proxy-4p4n9" [73d260fc-8c68-439c-a460-208cdb29b271] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:50:56.587416   74584 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4kwxj" [e301a0c5-17dc-43be-9fd5-c14b76c1b92c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.587429   74584 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wzgp7" [4c770fa7-174c-43ab-ac63-635b19152843] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.587437   74584 system_pods.go:61] "storage-provisioner" [4c394064-33ff-4fd0-a4bc-afb948952ac6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:50:56.587448   74584 system_pods.go:74] duration metric: took 18.182475ms to wait for pod list to return data ...
	I1101 09:50:56.587460   74584 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:50:56.596967   74584 default_sa.go:45] found service account: "default"
	I1101 09:50:56.596990   74584 default_sa.go:55] duration metric: took 9.524828ms for default service account to be created ...
	I1101 09:50:56.596999   74584 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:50:56.613956   74584 system_pods.go:86] 20 kube-system pods found
	I1101 09:50:56.613988   74584 system_pods.go:89] "amd-gpu-device-plugin-lr4lw" [bee1e3ae-5d43-4b43-a348-0e04ec066093] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:50:56.613995   74584 system_pods.go:89] "coredns-66bc5c9577-5v6h7" [ff58ca9c-6949-4ab8-b8ff-8be8e7b75757] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.614003   74584 system_pods.go:89] "coredns-66bc5c9577-vsbrs" [c3a65dae-82f4-4f33-b460-fa45a39b3342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.614009   74584 system_pods.go:89] "csi-hostpath-attacher-0" [50e03a30-f2e9-4ec1-ba85-6da2654030c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:50:56.614014   74584 system_pods.go:89] "csi-hostpath-resizer-0" [d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:50:56.614020   74584 system_pods.go:89] "csi-hostpathplugin-z7vjp" [96e87cd6-068d-40af-9966-b875b9a7629e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:50:56.614023   74584 system_pods.go:89] "etcd-addons-086339" [f17e5eab-51c0-409a-9bb3-3cb5e71200fd] Running
	I1101 09:50:56.614028   74584 system_pods.go:89] "kube-apiserver-addons-086339" [51b3d29f-af5e-441a-b3c0-754241fc92bc] Running
	I1101 09:50:56.614033   74584 system_pods.go:89] "kube-controller-manager-addons-086339" [62d54b81-f6bc-4bdc-bd22-c8a6fc39a043] Running
	I1101 09:50:56.614040   74584 system_pods.go:89] "kube-ingress-dns-minikube" [e328fd3e-a381-414d-ba99-1aa6f7f40585] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:50:56.614045   74584 system_pods.go:89] "kube-proxy-7fck9" [a834adcc-b0ec-4cad-8944-bea90a627787] Running
	I1101 09:50:56.614051   74584 system_pods.go:89] "kube-scheduler-addons-086339" [4db76834-5184-4a83-a228-35e83abc8c9d] Running
	I1101 09:50:56.614058   74584 system_pods.go:89] "metrics-server-85b7d694d7-6lx9r" [c4e44e90-7d77-43fc-913f-f26877e52760] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:50:56.614073   74584 system_pods.go:89] "nvidia-device-plugin-daemonset-jh2xq" [0a9234e2-8d6a-4110-86be-ff05f9be1a29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:50:56.614089   74584 system_pods.go:89] "registry-6b586f9694-8zvc5" [23d65f21-71d0-4da4-8f2f-5b59f93f9085] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:50:56.614095   74584 system_pods.go:89] "registry-creds-764b6fb674-ztjtq" [ae641ce9-b248-46a3-8e01-9d25e8d29825] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:50:56.614100   74584 system_pods.go:89] "registry-proxy-4p4n9" [73d260fc-8c68-439c-a460-208cdb29b271] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:50:56.614105   74584 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kwxj" [e301a0c5-17dc-43be-9fd5-c14b76c1b92c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.614114   74584 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzgp7" [4c770fa7-174c-43ab-ac63-635b19152843] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.614118   74584 system_pods.go:89] "storage-provisioner" [4c394064-33ff-4fd0-a4bc-afb948952ac6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:50:56.614126   74584 system_pods.go:126] duration metric: took 17.122448ms to wait for k8s-apps to be running ...
	I1101 09:50:56.614136   74584 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:50:56.614196   74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:50:56.662305   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:56.676451   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:57.009640   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.27202291s)
	W1101 09:50:57.009684   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:50:57.009709   74584 retry.go:31] will retry after 295.092784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
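The repeated ig-crd.yaml failures are a different problem: kubectl's client-side validation is reporting that some document in /etc/kubernetes/addons/ig-crd.yaml has neither apiVersion nor kind set, so every retry fails identically while the other gadget objects stay "unchanged". The log does not show the manifest itself, but one way to locate such a document in a multi-document file is sketched below (Go with gopkg.in/yaml.v3; the file path comes from the log, the rest is illustrative).

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path taken from the log; run wherever the manifest is readable.
		f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 1; ; i++ {
			var doc map[string]interface{}
			err := dec.Decode(&doc)
			if errors.Is(err, io.EOF) {
				break
			}
			if err != nil {
				fmt.Printf("document %d: parse error: %v\n", i, err)
				break
			}
			// kubectl's validation requires both fields on every document it applies.
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				fmt.Printf("document %d: apiVersion or kind not set (%d top-level keys)\n", i, len(doc))
			}
		}
	}
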
	I1101 09:50:57.009722   74584 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.357005393s)
	I1101 09:50:57.011440   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:50:57.012826   74584 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:50:57.014068   74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:50:57.014084   74584 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:50:57.060410   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:57.092501   74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:50:57.092526   74584 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:50:57.163456   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:57.166739   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:57.235815   74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:50:57.235844   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:50:57.305656   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:57.336319   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:50:57.561645   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:57.662574   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:57.663877   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:58.063249   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:58.157346   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:58.162591   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:58.566038   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:58.574812   74584 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.96059055s)
	I1101 09:50:58.574848   74584 system_svc.go:56] duration metric: took 1.960707525s WaitForService to wait for kubelet
	I1101 09:50:58.574856   74584 kubeadm.go:587] duration metric: took 13.102108035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:50:58.574874   74584 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:50:58.575108   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.73180936s)
	I1101 09:50:58.586405   74584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 09:50:58.586436   74584 node_conditions.go:123] node cpu capacity is 2
	I1101 09:50:58.586457   74584 node_conditions.go:105] duration metric: took 11.577545ms to run NodePressure ...
	I1101 09:50:58.586472   74584 start.go:242] waiting for startup goroutines ...
	I1101 09:50:58.664635   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:58.665016   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:59.063972   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:59.170042   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:59.176798   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:59.577259   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:59.664063   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:59.665180   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:00.063306   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:00.173864   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:00.174338   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.868634982s)
	W1101 09:51:00.174389   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:00.174423   74584 retry.go:31] will retry after 509.276592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:00.174461   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.838092131s)
	I1101 09:51:00.175590   74584 addons.go:480] Verifying addon gcp-auth=true in "addons-086339"
	I1101 09:51:00.176082   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:00.177144   74584 out.go:179] * Verifying gcp-auth addon...
	I1101 09:51:00.179153   74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:51:00.185078   74584 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:51:00.185104   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:00.569905   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:00.666711   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:00.668288   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:00.684564   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:00.685802   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:01.058804   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:01.162413   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:01.162519   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:01.184967   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:01.561792   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:01.660578   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:01.660604   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:01.687510   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:02.048703   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.364096236s)
	W1101 09:51:02.048744   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:02.048770   74584 retry.go:31] will retry after 922.440306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:02.058033   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:02.156454   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:02.156517   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:02.184626   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:02.560632   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:02.663377   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:02.663392   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:02.682802   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:02.972204   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:03.066417   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:03.162498   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:03.164331   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:03.185238   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:03.558965   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:03.660685   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:03.662797   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:03.683857   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:03.988155   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.015906584s)
	W1101 09:51:03.988197   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:03.988221   74584 retry.go:31] will retry after 1.512024934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:04.059661   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:04.158989   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:04.159171   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:04.184262   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:04.559848   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:04.665219   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:04.666152   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:04.684684   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:05.059373   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:05.157706   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:05.158120   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:05.184998   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:05.500748   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:05.560240   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:05.659023   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:05.660031   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:05.684729   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:06.059474   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:06.157196   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:06.157311   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:06.182088   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:51:06.269741   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:06.269786   74584 retry.go:31] will retry after 2.204116799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:06.559209   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:06.657408   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:06.657492   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:06.683284   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:07.059744   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:07.160264   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:07.160549   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:07.183753   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:07.558791   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:07.658454   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:07.662675   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:07.684198   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:08.065874   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:08.160732   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:08.161495   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:08.182870   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:08.474158   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:08.564218   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:08.659007   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:08.661853   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:08.684365   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:09.062466   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:09.159228   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:09.159372   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:09.183927   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:09.561230   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:09.664415   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:09.666273   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:09.684865   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:09.700010   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.225813085s)
	W1101 09:51:09.700056   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:09.700081   74584 retry.go:31] will retry after 3.484047661s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:10.059617   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:10.156799   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:10.156883   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:10.183999   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:10.560483   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:10.661603   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:10.661780   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:10.686351   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:11.081718   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:11.188353   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:11.188507   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:11.188624   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:11.558634   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:11.660662   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:11.663221   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:11.683762   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:12.059387   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:12.156602   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:12.156961   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:12.183069   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:12.558360   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:12.657779   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:12.659195   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:12.684167   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:13.059425   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:13.159273   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:13.159720   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:13.182662   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:13.184729   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:13.558837   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:13.659127   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:13.659431   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:13.682290   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:51:14.013627   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:14.013674   74584 retry.go:31] will retry after 3.772853511s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:14.060473   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:14.168480   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:14.168525   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:14.195048   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:14.559885   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:14.655949   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:14.656674   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:14.682561   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:15.059773   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:15.158683   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:15.158997   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:15.185198   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:15.559183   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:15.657568   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:15.657667   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:15.683337   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:16.059611   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:16.156727   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:16.158488   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:16.182596   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:16.558923   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:16.656902   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:16.657753   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:16.683813   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:17.059799   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:17.157794   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:17.158058   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:17.183320   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:17.562511   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:17.661802   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:17.663610   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:17.683753   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:17.786898   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:18.062486   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:18.165903   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:18.166305   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:18.185036   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:18.563358   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:18.661780   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:18.664168   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:18.686501   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:19.062933   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:19.159993   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.373047606s)
	W1101 09:51:19.160054   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:19.160090   74584 retry.go:31] will retry after 8.062833615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
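(The apply failure above is a client-side YAML validation error rather than an API-server problem: every object in /etc/kubernetes/addons/ig-crd.yaml must carry top-level apiVersion and kind fields, and kubectl rejects a document that omits them, which is why the same error repeats on each retry below. As an illustrative sketch only, with a placeholder group and name rather than the actual inspektor-gadget CRD, a manifest kubectl will accept starts like this:)

    # Placeholder CRD showing the required top-level fields; the group/kind/name
    # here are illustrative, not the real inspektor-gadget resource.
    cat <<'EOF' | kubectl apply --dry-run=client -f -
    apiVersion: apiextensions.k8s.io/v1        # required top-level field
    kind: CustomResourceDefinition             # required top-level field
    metadata:
      name: examples.example.io                # must be <plural>.<group>
    spec:
      group: example.io
      scope: Namespaced
      names:
        plural: examples
        singular: example
        kind: Example
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
    EOF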
	I1101 09:51:19.160265   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:19.161792   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:19.187129   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:19.562165   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:19.662490   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:19.662887   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:19.685224   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:20.062452   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:20.158649   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:20.158963   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:20.185553   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:20.560324   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:20.663470   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:20.664773   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:20.687217   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:21.058336   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:21.158067   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:21.158764   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:21.184179   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:21.562709   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:21.660636   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:21.661331   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:21.683251   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:22.058468   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:22.158449   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:22.161441   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:22.183647   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:22.559209   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:22.657596   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:22.658067   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:22.684022   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:23.060587   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:23.159313   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:23.160492   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:23.183233   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:23.577231   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:23.658412   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:23.661233   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:23.684740   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:24.059042   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:24.157394   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:24.158911   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:24.182864   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:24.559933   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:24.657638   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:24.661214   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:24.686127   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:25.059953   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:25.158151   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:25.160939   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:25.183657   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:25.565339   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:25.663990   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:25.664201   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:25.683465   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:26.059376   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:26.158991   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:26.159088   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:26.184884   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:26.559386   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:26.657922   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:26.660583   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:26.683688   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:27.058939   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:27.156101   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:27.156998   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:27.182909   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:27.224025   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:27.562477   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:27.660651   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:27.662259   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:27.681905   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:28.059984   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:28.160493   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:28.162286   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:28.186135   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:51:28.200979   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:28.201029   74584 retry.go:31] will retry after 10.395817371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:28.558989   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:28.657430   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:28.660330   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:28.683885   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:29.061934   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:29.157765   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:29.157917   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:29.184278   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:29.560897   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:29.657774   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:29.657838   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:29.683106   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:30.059693   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:30.160732   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:30.166378   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:30.265635   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:30.558787   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:30.656060   74584 kapi.go:107] duration metric: took 35.503223323s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:51:30.656373   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:30.682215   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:31.059187   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:31.157561   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:31.258067   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:31.560106   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:31.657305   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:31.683226   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:32.059058   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:32.158395   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:32.182943   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:32.559674   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:32.660135   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:32.684028   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:33.059220   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:33.159029   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:33.189054   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:33.699380   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:33.699471   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:33.700370   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:34.059307   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:34.158409   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:34.189459   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:34.558736   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:34.656864   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:34.682855   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:35.058847   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:35.156770   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:35.182411   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:35.559605   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:35.657060   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:35.682886   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:36.059230   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:36.158265   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:36.185067   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:36.562462   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:36.657785   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:36.684734   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:37.059270   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:37.156638   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:37.184172   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:37.558438   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:37.656955   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:37.684255   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:38.061827   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:38.157365   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:38.182685   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:38.560831   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:38.597843   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:38.656804   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:38.686009   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:39.061543   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:39.158425   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:39.183760   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:39.559306   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:39.657197   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:39.684893   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:39.748441   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.150549422s)
	W1101 09:51:39.748504   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:39.748545   74584 retry.go:31] will retry after 20.354212059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:40.091278   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:40.159135   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:40.189976   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:40.561293   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:40.657506   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:40.682812   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:41.059036   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:41.157077   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:41.183024   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:41.560657   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:41.662059   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:41.686139   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:42.059712   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:42.158078   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:42.184717   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:42.558428   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:42.657474   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:42.682401   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:43.061067   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:43.159023   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:43.182945   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:43.559721   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:43.658905   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:43.683665   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:44.059768   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:44.156686   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:44.182520   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:44.558486   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:44.659410   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:44.686714   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:45.059691   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:45.161012   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:45.186846   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:45.566991   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:45.661771   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:45.683563   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:46.061274   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:46.157945   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:46.184842   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:46.559462   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:46.659702   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:46.682680   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:47.058242   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:47.159894   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:47.185416   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:47.561755   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:47.660011   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:47.683518   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:48.061815   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:48.158606   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:48.186741   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:48.562551   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:48.660513   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:48.683374   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:49.061955   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:49.158516   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:49.182835   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:49.558347   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:49.660756   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:49.685651   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:50.059457   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:50.161169   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:50.185382   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:50.560490   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:50.667931   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:50.691744   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:51.060229   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:51.163272   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:51.185468   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:51.561847   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:51.657559   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:51.684472   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:52.065897   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:52.165405   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:52.184183   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:52.558429   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:52.659763   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:52.687124   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:53.060334   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:53.159793   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:53.260599   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:53.836679   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:53.844731   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:53.846382   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:54.061169   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:54.160164   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:54.184130   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:54.559624   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:54.660771   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:54.683387   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:55.060182   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:55.158098   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:55.184607   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:55.568135   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:55.666901   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:55.688352   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:56.061312   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:56.160289   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:56.183561   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:56.559442   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:56.666114   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:56.686070   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:57.059598   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:57.157253   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:57.184083   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:57.559370   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:57.657282   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:57.684369   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:58.059645   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:58.160950   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:58.183605   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:58.559980   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:58.660720   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:58.682723   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:59.061658   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:59.161368   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:59.186554   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:59.562493   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:59.658000   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:59.686396   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:00.059261   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:00.103310   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:52:00.158774   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:00.183231   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:00.562324   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:00.659611   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:00.682795   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:01.061408   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:01.158866   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:01.188200   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:01.344727   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.241365643s)
	W1101 09:52:01.344783   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:52:01.344810   74584 retry.go:31] will retry after 24.70836809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:52:01.558702   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:01.657288   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:01.683224   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:02.061177   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:02.158031   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:02.185134   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:02.559729   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:02.661884   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:02.684276   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:03.058102   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:03.159115   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:03.184840   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:03.559718   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:03.658993   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:03.682755   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:04.061600   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:04.157504   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:04.182206   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:04.558833   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:04.658122   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:04.690795   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:05.060282   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:05.159649   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:05.182512   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:05.558584   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:05.657372   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:05.682747   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:06.059347   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:06.156954   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:06.184088   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:06.559677   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:06.657737   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:06.683063   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:07.058922   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:07.156647   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:07.183210   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:07.559741   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:07.656366   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:07.684732   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:08.060305   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:08.161326   74584 kapi.go:107] duration metric: took 1m13.008685899s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:52:08.184485   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:08.563527   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:08.684225   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:09.062454   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:09.183134   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:09.559703   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:09.683034   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:10.059517   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:10.183595   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:10.559051   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:10.684292   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:11.060725   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:11.184057   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:11.560407   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:11.684061   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:12.059623   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:12.338951   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:12.563238   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:12.687086   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:13.065805   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:13.186970   74584 kapi.go:107] duration metric: took 1m13.007813603s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:52:13.188654   74584 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-086339 cluster.
	I1101 09:52:13.190102   74584 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:52:13.191551   74584 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
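(The gcp-auth message above describes how to opt an individual pod out of credential injection. A minimal sketch, assuming the label key printed in the log is used with a value of "true"; the pod name and image below are placeholders, not taken from this test:)

    # Placeholder pod that opts out of gcp-auth credential mounting via the
    # label named in the addon output above; name and image are illustrative.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # placeholder name
      labels:
        gcp-auth-skip-secret: "true"     # label key from the gcp-auth message
    spec:
      containers:
        - name: app
          image: docker.io/nginx:alpine
    EOF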
	I1101 09:52:13.561959   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:14.059590   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:14.558397   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:15.059526   74584 kapi.go:107] duration metric: took 1m18.505070405s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
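The repeated kapi.go:96 lines above ("waiting for pod ..., current state: Pending") and the kapi.go:107 duration summaries come from a label-selector polling loop. Below is a minimal sketch of such a loop using client-go, not minikube's actual kapi.go; the waitForLabel name and the 500ms interval are assumptions inferred from the log timestamps.

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForLabel polls until every pod matching selector in ns is Ready,
// or the timeout expires.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			if len(pods.Items) == 0 {
				return false, nil // nothing scheduled yet
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil // e.g. still Pending, as in the log above
				}
			}
			return true, nil
		})
}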
	I1101 09:52:26.053439   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:52:26.787218   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:52:26.787354   74584 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
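The inspektor-gadget failure above is a client-side schema check: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest it read carries no apiVersion or kind, i.e. an empty TypeMeta. A small sketch of that check under the same assumption (validateTypeMeta is a hypothetical helper, not kubectl's code, and the sample documents are illustrative, not the actual ig-crd.yaml contents):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

// validateTypeMeta reproduces the "apiVersion not set, kind not set" style
// of error for a single manifest document.
func validateTypeMeta(doc []byte) error {
	var tm metav1.TypeMeta
	if err := yaml.Unmarshal(doc, &tm); err != nil {
		return err
	}
	var missing []string
	if tm.APIVersion == "" {
		missing = append(missing, "apiVersion not set")
	}
	if tm.Kind == "" {
		missing = append(missing, "kind not set")
	}
	if len(missing) > 0 {
		return fmt.Errorf("error validating data: %v", missing)
	}
	return nil
}

func main() {
	// A document missing TypeMeta fails the same way the log shows.
	fmt.Println(validateTypeMeta([]byte("metadata:\n  name: example\n")))
	// A document with apiVersion and kind set passes this check.
	fmt.Println(validateTypeMeta([]byte("apiVersion: apiextensions.k8s.io/v1\nkind: CustomResourceDefinition\n")))
}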
	I1101 09:52:26.789142   74584 out.go:179] * Enabled addons: default-storageclass, registry-creds, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1101 09:52:26.790527   74584 addons.go:515] duration metric: took 1m41.317758805s for enable addons: enabled=[default-storageclass registry-creds amd-gpu-device-plugin storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1101 09:52:26.790585   74584 start.go:247] waiting for cluster config update ...
	I1101 09:52:26.790606   74584 start.go:256] writing updated cluster config ...
	I1101 09:52:26.790869   74584 ssh_runner.go:195] Run: rm -f paused
	I1101 09:52:26.797220   74584 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:52:26.802135   74584 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vsbrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.807671   74584 pod_ready.go:94] pod "coredns-66bc5c9577-vsbrs" is "Ready"
	I1101 09:52:26.807696   74584 pod_ready.go:86] duration metric: took 5.533544ms for pod "coredns-66bc5c9577-vsbrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.809972   74584 pod_ready.go:83] waiting for pod "etcd-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.815396   74584 pod_ready.go:94] pod "etcd-addons-086339" is "Ready"
	I1101 09:52:26.815421   74584 pod_ready.go:86] duration metric: took 5.421578ms for pod "etcd-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.818352   74584 pod_ready.go:83] waiting for pod "kube-apiserver-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.823369   74584 pod_ready.go:94] pod "kube-apiserver-addons-086339" is "Ready"
	I1101 09:52:26.823403   74584 pod_ready.go:86] duration metric: took 5.02397ms for pod "kube-apiserver-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.825247   74584 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:27.201328   74584 pod_ready.go:94] pod "kube-controller-manager-addons-086339" is "Ready"
	I1101 09:52:27.201355   74584 pod_ready.go:86] duration metric: took 376.08311ms for pod "kube-controller-manager-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:27.402263   74584 pod_ready.go:83] waiting for pod "kube-proxy-7fck9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:27.802591   74584 pod_ready.go:94] pod "kube-proxy-7fck9" is "Ready"
	I1101 09:52:27.802625   74584 pod_ready.go:86] duration metric: took 400.328354ms for pod "kube-proxy-7fck9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:28.002425   74584 pod_ready.go:83] waiting for pod "kube-scheduler-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:28.401943   74584 pod_ready.go:94] pod "kube-scheduler-addons-086339" is "Ready"
	I1101 09:52:28.401969   74584 pod_ready.go:86] duration metric: took 399.516912ms for pod "kube-scheduler-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:28.401979   74584 pod_ready.go:40] duration metric: took 1.604730154s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:52:28.446357   74584 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:52:28.448281   74584 out.go:179] * Done! kubectl is now configured to use "addons-086339" cluster and "default" namespace by default
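start.go:628 above reports the version skew between the kubectl client and the cluster ("minor skew: 0"). A toy sketch of that comparison, with deliberately simplistic version parsing and not minikube's actual start.go logic:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// mustMinor extracts the minor component from a "major.minor.patch" version.
func mustMinor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		panic("unexpected version: " + v)
	}
	m, err := strconv.Atoi(parts[1])
	if err != nil {
		panic(err)
	}
	return m
}

func main() {
	client, cluster := "1.34.1", "1.34.1" // values from the log line above
	skew := mustMinor(client) - mustMinor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	if skew > 1 {
		fmt.Println("warning: kubectl and cluster minor versions differ by more than 1")
	}
}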
	
	
	==> CRI-O <==
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.493762476Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991272493732721,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8da11cc9-e3e6-42b7-8b25-9fbef0b5863c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.495751222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a43b13bd-cd83-453e-bfec-194e48df3256 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.495886828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a43b13bd-cd83-453e-bfec-194e48df3256 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.496396001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSandboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name
:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd
55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"con
tainerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a43b13bd-cd83-453e-bfec-194e48df3256 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.542574381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=daccb402-3088-48ed-997c-96cb295b80e1 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.542667879Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=daccb402-3088-48ed-997c-96cb295b80e1 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.543918251Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=582e17ec-5d7c-43b1-a8b9-c5844ca93000 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.545112251Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991272545079285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=582e17ec-5d7c-43b1-a8b9-c5844ca93000 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.545761692Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f674aaa-d446-48b1-824e-331da7751d60 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.546023274Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f674aaa-d446-48b1-824e-331da7751d60 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.546391593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSandboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name
:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd
55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"con
tainerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f674aaa-d446-48b1-824e-331da7751d60 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.584610395Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8adb3d4a-532d-437f-8010-bd862eb9bb16 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.584744468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8adb3d4a-532d-437f-8010-bd862eb9bb16 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.586071799Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a8ad1f8-747b-495f-8cab-a8953c5338a1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.587449550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991272587421814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a8ad1f8-747b-495f-8cab-a8953c5338a1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.588396765Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41864b4a-9076-48c6-89c0-760957b1a65d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.588488560Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41864b4a-9076-48c6-89c0-760957b1a65d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.588929638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSandboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name
:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd
55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"con
tainerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41864b4a-9076-48c6-89c0-760957b1a65d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.633391948Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d464116-3916-4238-bc93-98f0081cfb4b name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.633484670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d464116-3916-4238-bc93-98f0081cfb4b name=/runtime.v1.RuntimeService/Version
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.635335971Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d10373c3-1aec-446d-93a9-5052887d0261 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.637233621Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991272637199543,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d10373c3-1aec-446d-93a9-5052887d0261 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.638095967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4b8f6eb1-b2c4-4f59-b21b-2e0cdd5f9958 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.638184482Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4b8f6eb1-b2c4-4f59-b21b-2e0cdd5f9958 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:01:12 addons-086339 crio[826]: time="2025-11-01 10:01:12.638480061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.po
rts: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSandboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a
7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba
112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd34
6a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name
:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd
55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Meta
data:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"con
tainerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.ku
bernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube
-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4b8f6eb1-b2c4-4f59-b21b-2e0cdd5f9958 name=/runtime.v1.RuntimeService/ListContainers
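Editor's note: the repeated /runtime.v1.RuntimeService/ListContainers entries above are routine CRI polling, and the response payload is the same data summarized in the container-status table below. For reference, a minimal Go sketch of the equivalent query against cri-o's gRPC socket; the socket path, the empty filter, and the output format are illustrative assumptions, and `crictl ps -a` issues essentially the same RPC:

    // Illustrative only: queries cri-o over its CRI socket the same way the
    // kubelet does in the log above. Socket path and output format are assumed.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // cri-o's default socket path inside the minikube guest (assumption).
        conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // An empty filter is what makes cri-o log "No filters were applied,
        // returning full container list".
        resp, err := runtimeapi.NewRuntimeServiceClient(conn).
            ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%.13s  %-25s  %v\n", c.Id, c.Metadata.Name, c.State)
        }
    }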
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d8f9ab035f10b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   ecbb6e0269dbe       busybox
	60f64f1e12642       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             9 minutes ago       Running             controller                0                   b2e63f129e7ca       ingress-nginx-controller-675c5ddd98-g7dks
	a4b410307ca23       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   9 minutes ago       Exited              patch                     0                   48e637e86e449       ingress-nginx-admission-patch-dw6sn
	764b375ef3791       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   9 minutes ago       Exited              create                    0                   1c83f726dda75       ingress-nginx-admission-create-d7qkm
	6ccff636c81da       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            9 minutes ago       Running             gadget                    0                   ae1c1b106a1ce       gadget-p2brt
	e5d4912957560       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               9 minutes ago       Running             minikube-ingress-dns      0                   8aac4234df2d1       kube-ingress-dns-minikube
	323c0222f1b72       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     10 minutes ago      Running             amd-gpu-device-plugin     0                   1c7e949564af5       amd-gpu-device-plugin-lr4lw
	6de230bb7ebf7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   4fbf69bbad2cf       storage-provisioner
	a27cff89c3381       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             10 minutes ago      Running             coredns                   0                   d7fa84c405309       coredns-66bc5c9577-vsbrs
	260edbddb00ef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             10 minutes ago      Running             kube-proxy                0                   089a55380f097       kube-proxy-7fck9
	86586375e770d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             10 minutes ago      Running             kube-scheduler            0                   47c204cffec81       kube-scheduler-addons-086339
	e1c9ad62c824f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             10 minutes ago      Running             kube-apiserver            0                   25028e524345d       kube-apiserver-addons-086339
	195a44f107dbd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             10 minutes ago      Running             etcd                      0                   0780152663a4b       etcd-addons-086339
	9a6a05d5c3b32       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             10 minutes ago      Running             kube-controller-manager   0                   4303a653e0e77       kube-controller-manager-addons-086339
	
	
	==> coredns [a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387] <==
	[INFO] 10.244.0.8:46984 - 64533 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000141653s
	[INFO] 10.244.0.8:46984 - 26572 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000122796s
	[INFO] 10.244.0.8:46984 - 13929 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000122328s
	[INFO] 10.244.0.8:46984 - 50125 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000111517s
	[INFO] 10.244.0.8:46984 - 28460 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076823s
	[INFO] 10.244.0.8:46984 - 37293 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000357436s
	[INFO] 10.244.0.8:46984 - 35576 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000074841s
	[INFO] 10.244.0.8:47197 - 56588 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121682s
	[INFO] 10.244.0.8:47197 - 56863 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074546s
	[INFO] 10.244.0.8:55042 - 52218 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00018264s
	[INFO] 10.244.0.8:55042 - 52511 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079606s
	[INFO] 10.244.0.8:46708 - 46443 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066375s
	[INFO] 10.244.0.8:46708 - 46765 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066983s
	[INFO] 10.244.0.8:59900 - 32652 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000207279s
	[INFO] 10.244.0.8:59900 - 32872 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000078309s
	[INFO] 10.244.0.23:50316 - 52228 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001915683s
	[INFO] 10.244.0.23:47612 - 63606 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002354882s
	[INFO] 10.244.0.23:53727 - 34179 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138277s
	[INFO] 10.244.0.23:43312 - 5456 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125706s
	[INFO] 10.244.0.23:34742 - 50233 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105505s
	[INFO] 10.244.0.23:42706 - 32458 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148964s
	[INFO] 10.244.0.23:47433 - 16041 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00404755s
	[INFO] 10.244.0.23:43796 - 36348 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003930977s
	[INFO] 10.244.0.28:59610 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000657818s
	[INFO] 10.244.0.28:58478 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000385159s
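Editor's note: the long runs of NXDOMAIN followed by a final NOERROR above are not lookup failures; they are the pod resolver walking its resolv.conf search list before trying the name as written. A small illustrative Go sketch of that candidate ordering, assuming a querying pod in the kube-system namespace with the standard three cluster search suffixes and the Kubernetes default ndots:5 (values are assumptions, not read from this cluster):

    // Illustrative only: reproduces the resolver's search-list candidate order
    // that explains the NXDOMAIN-then-NOERROR pattern in the CoreDNS log.
    package main

    import (
        "fmt"
        "strings"
    )

    // candidates mimics the usual resolv.conf behaviour: with fewer than ndots
    // dots in the name, the search suffixes are tried before the bare name.
    func candidates(name string, search []string, ndots int) []string {
        var out []string
        bareFirst := strings.Count(name, ".") >= ndots
        if bareFirst {
            out = append(out, name+".")
        }
        for _, s := range search {
            out = append(out, name+"."+s+".")
        }
        if !bareFirst {
            out = append(out, name+".")
        }
        return out
    }

    func main() {
        // Assumed search list for a pod in kube-system; ndots:5 is the Kubernetes default.
        search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
        for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
            fmt.Println(q)
        }
    }

With ndots:5, the four-dot name does not count as "absolute enough", so the three suffixed candidates are queried first (each answered NXDOMAIN) and the bare name last (NOERROR), which is exactly the sequence CoreDNS logged for registry.kube-system.svc.cluster.local.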
	
	
	==> describe nodes <==
	Name:               addons-086339
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-086339
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=addons-086339
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_50_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-086339
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:50:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-086339
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:01:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    addons-086339
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0be334a213a4e9abad36168cb6c4d93
	  System UUID:                a0be334a-213a-4e9a-bad3-6168cb6c4d93
	  Boot ID:                    f5f61220-a436-4e42-9f0c-21fc51d403ab
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m43s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  gadget                      gadget-p2brt                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-g7dks    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 amd-gpu-device-plugin-lr4lw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-vsbrs                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-086339                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-086339                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-086339        200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-7fck9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-086339                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 10m   kube-proxy       
	  Normal  Starting                 10m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m   kubelet          Node addons-086339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m   kubelet          Node addons-086339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m   kubelet          Node addons-086339 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m   kubelet          Node addons-086339 status is now: NodeReady
	  Normal  RegisteredNode           10m   node-controller  Node addons-086339 event: Registered Node addons-086339 in Controller
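Editor's note, a quick arithmetic check of the "Allocated resources" summary above: the non-zero CPU requests in the pod table are

    100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m

i.e. 850m / 2000m ≈ 42% of the node's 2 CPUs. Memory requests are 90Mi + 70Mi + 100Mi = 260Mi (≈ 6% of 4001788Ki), and the only memory limit is coredns's 170Mi (≈ 4%), matching the summary. The default/nginx pod requests nothing, so it is not competing for node resources.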
	
	
	==> dmesg <==
	[  +0.026933] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.422693] kauditd_printk_skb: 282 callbacks suppressed
	[  +0.000178] kauditd_printk_skb: 179 callbacks suppressed
	[Nov 1 09:51] kauditd_printk_skb: 480 callbacks suppressed
	[ +10.588247] kauditd_printk_skb: 85 callbacks suppressed
	[  +8.893680] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.164899] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.079506] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.550370] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.067618] kauditd_printk_skb: 131 callbacks suppressed
	[  +2.164833] kauditd_printk_skb: 126 callbacks suppressed
	[Nov 1 09:52] kauditd_printk_skb: 130 callbacks suppressed
	[  +6.663248] kauditd_printk_skb: 68 callbacks suppressed
	[  +6.258025] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000041] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.077918] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000038] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.048376] kauditd_printk_skb: 98 callbacks suppressed
	[  +0.000043] kauditd_printk_skb: 78 callbacks suppressed
	[Nov 1 09:53] kauditd_printk_skb: 58 callbacks suppressed
	[  +4.089930] kauditd_printk_skb: 42 callbacks suppressed
	[ +31.556122] kauditd_printk_skb: 74 callbacks suppressed
	[Nov 1 09:54] kauditd_printk_skb: 80 callbacks suppressed
	[ +15.872282] kauditd_printk_skb: 22 callbacks suppressed
	[Nov 1 09:59] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667] <==
	{"level":"warn","ts":"2025-11-01T09:51:53.829077Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:51:53.520485Z","time spent":"307.914654ms","remote":"127.0.0.1:50442","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4224,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" mod_revision:715 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" value_size:4158 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" > >"}
	{"level":"warn","ts":"2025-11-01T09:51:53.837101Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.85086ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:53.837158Z","caller":"traceutil/trace.go:172","msg":"trace[1726047932] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1054; }","duration":"205.918617ms","start":"2025-11-01T09:51:53.631230Z","end":"2025-11-01T09:51:53.837149Z","steps":["trace[1726047932] 'agreement among raft nodes before linearized reading'  (duration: 205.832252ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:51:53.837332Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.114488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:53.837352Z","caller":"traceutil/trace.go:172","msg":"trace[1767754287] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1054; }","duration":"160.137708ms","start":"2025-11-01T09:51:53.677208Z","end":"2025-11-01T09:51:53.837346Z","steps":["trace[1767754287] 'agreement among raft nodes before linearized reading'  (duration: 160.097095ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:51:53.837427Z","caller":"traceutil/trace.go:172","msg":"trace[169582400] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"313.012286ms","start":"2025-11-01T09:51:53.524403Z","end":"2025-11-01T09:51:53.837415Z","steps":["trace[169582400] 'process raft request'  (duration: 312.936714ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:51:53.837521Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:51:53.524385Z","time spent":"313.094727ms","remote":"127.0.0.1:50348","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4615,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" mod_revision:1047 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" value_size:4543 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" > >"}
	{"level":"warn","ts":"2025-11-01T09:51:53.837540Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.263588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:53.837560Z","caller":"traceutil/trace.go:172","msg":"trace[1222634] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1055; }","duration":"187.33ms","start":"2025-11-01T09:51:53.650224Z","end":"2025-11-01T09:51:53.837554Z","steps":["trace[1222634] 'agreement among raft nodes before linearized reading'  (duration: 187.245695ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:51:57.997674Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.945423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:57.998286Z","caller":"traceutil/trace.go:172","msg":"trace[902941296] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1089; }","duration":"106.560193ms","start":"2025-11-01T09:51:57.891708Z","end":"2025-11-01T09:51:57.998268Z","steps":["trace[902941296] 'range keys from in-memory index tree'  (duration: 105.862666ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:52:04.319796Z","caller":"traceutil/trace.go:172","msg":"trace[427956117] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"140.175418ms","start":"2025-11-01T09:52:04.179583Z","end":"2025-11-01T09:52:04.319759Z","steps":["trace[427956117] 'process raft request'  (duration: 140.063245ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:52:08.551381Z","caller":"traceutil/trace.go:172","msg":"trace[603420838] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"197.437726ms","start":"2025-11-01T09:52:08.353928Z","end":"2025-11-01T09:52:08.551366Z","steps":["trace[603420838] 'process raft request'  (duration: 197.339599ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:52:12.328289Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.65917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:52:12.328359Z","caller":"traceutil/trace.go:172","msg":"trace[1819451364] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"151.738106ms","start":"2025-11-01T09:52:12.176611Z","end":"2025-11-01T09:52:12.328349Z","steps":["trace[1819451364] 'range keys from in-memory index tree'  (duration: 151.603213ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:52:19.593365Z","caller":"traceutil/trace.go:172","msg":"trace[1734006161] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"230.197039ms","start":"2025-11-01T09:52:19.363155Z","end":"2025-11-01T09:52:19.593352Z","steps":["trace[1734006161] 'process raft request'  (duration: 230.054763ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:53:03.073159Z","caller":"traceutil/trace.go:172","msg":"trace[844100605] linearizableReadLoop","detail":"{readStateIndex:1471; appliedIndex:1471; }","duration":"184.287063ms","start":"2025-11-01T09:53:02.888805Z","end":"2025-11-01T09:53:03.073092Z","steps":["trace[844100605] 'read index received'  (duration: 184.274805ms)","trace[844100605] 'applied index is now lower than readState.Index'  (duration: 11.185µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:53:03.073336Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.514416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:53:03.073356Z","caller":"traceutil/trace.go:172","msg":"trace[379602539] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1424; }","duration":"184.548883ms","start":"2025-11-01T09:53:02.888802Z","end":"2025-11-01T09:53:03.073351Z","steps":["trace[379602539] 'agreement among raft nodes before linearized reading'  (duration: 184.47499ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:53:03.073440Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.732425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-11-01T09:53:03.073464Z","caller":"traceutil/trace.go:172","msg":"trace[1841159583] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1424; }","duration":"173.762443ms","start":"2025-11-01T09:53:02.899696Z","end":"2025-11-01T09:53:03.073458Z","steps":["trace[1841159583] 'agreement among raft nodes before linearized reading'  (duration: 173.676648ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:53:03.073212Z","caller":"traceutil/trace.go:172","msg":"trace[990398784] transaction","detail":"{read_only:false; response_revision:1424; number_of_response:1; }","duration":"298.156963ms","start":"2025-11-01T09:53:02.775044Z","end":"2025-11-01T09:53:03.073201Z","steps":["trace[990398784] 'process raft request'  (duration: 298.073448ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:00:35.435145Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1806}
	{"level":"info","ts":"2025-11-01T10:00:35.507318Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1806,"took":"68.899276ms","hash":945816022,"current-db-size-bytes":6217728,"current-db-size":"6.2 MB","current-db-size-in-use-bytes":4050944,"current-db-size-in-use":"4.1 MB"}
	{"level":"info","ts":"2025-11-01T10:00:35.507380Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":945816022,"revision":1806,"compact-revision":-1}
	
	
	==> kernel <==
	 10:01:13 up 11 min,  0 users,  load average: 0.64, 0.66, 0.55
	Linux addons-086339 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 09:51:45.526596       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.150.255:443: connect: connection refused" logger="UnhandledError"
	E1101 09:51:45.531959       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.150.255:443: connect: connection refused" logger="UnhandledError"
	I1101 09:51:45.647009       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:52:39.519537       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:41530: use of closed network connection
	I1101 09:52:48.989373       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.57.119"}
	I1101 09:53:11.180343       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 09:53:11.353371       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.7.153"}
	I1101 09:53:46.542354       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1101 09:59:18.596497       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:59:18.597023       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:59:18.641644       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:59:18.641705       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:59:18.642882       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:59:18.642938       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:59:18.667098       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:59:18.667271       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:59:18.699786       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:59:18.699898       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1101 09:59:19.643429       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1101 09:59:19.701335       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1101 09:59:19.721247       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1101 10:00:37.179156       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2] <==
	E1101 09:59:35.978218       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:59:35.979239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:59:38.341797       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:59:38.343162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:59:41.476578       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:59:41.477757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1101 09:59:44.086990       1 reconciler.go:364] "attacherDetacher.AttachVolume started" logger="persistentvolume-attach-detach-controller" volumeName="kubernetes.io/csi/hostpath.csi.k8s.io^95d0b596-b708-11f0-979a-ce1acd12cba3" nodeName="addons-086339" scheduledPods=["default/task-pv-pod"]
	I1101 09:59:44.352321       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:59:44.352374       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:59:44.421472       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 09:59:44.421649       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 09:59:59.548631       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:59:59.550334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 10:00:00.894580       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 10:00:00.895667       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 10:00:01.800596       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 10:00:01.801785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 10:00:29.194041       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 10:00:29.195428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 10:00:38.263769       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 10:00:38.265085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 10:00:46.598339       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 10:00:46.599423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 10:00:59.811512       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 10:00:59.813647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66] <==
	I1101 09:50:47.380388       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:50:47.481009       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:50:47.481962       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.58"]
	E1101 09:50:47.483258       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:50:47.618974       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:50:47.619028       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:50:47.619055       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:50:47.646432       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:50:47.648118       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:50:47.648153       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:50:47.664129       1 config.go:309] "Starting node config controller"
	I1101 09:50:47.666955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:50:47.666969       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:50:47.665033       1 config.go:200] "Starting service config controller"
	I1101 09:50:47.666978       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:50:47.667949       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:50:47.667987       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:50:47.668010       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:50:47.668021       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:50:47.767136       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:50:47.771739       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:50:47.772010       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986] <==
	E1101 09:50:37.221936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:50:37.222056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:50:37.222116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:50:37.222130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:50:37.225229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:50:37.225317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:50:37.225378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:50:37.227418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:50:37.227443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:50:37.227647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:50:37.227768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:50:37.227996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:50:38.054220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:50:38.064603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:50:38.082458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:50:38.180400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:50:38.210958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:50:38.220410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:50:38.222634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:50:38.324209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:50:38.347306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:50:38.391541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:50:38.445129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:50:38.559973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:50:41.263288       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 10:00:20 addons-086339 kubelet[1515]: E1101 10:00:20.397760    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991220397149426  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 10:00:20 addons-086339 kubelet[1515]: E1101 10:00:20.397784    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991220397149426  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 10:00:30 addons-086339 kubelet[1515]: E1101 10:00:30.401057    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991230400557931  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 10:00:30 addons-086339 kubelet[1515]: E1101 10:00:30.401083    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991230400557931  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 10:00:31 addons-086339 kubelet[1515]: E1101 10:00:31.027702    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="80f28ba1-b1ac-4f7a-9a35-3fd834d8e54e"
	Nov 01 10:00:32 addons-086339 kubelet[1515]: E1101 10:00:32.026691    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
	Nov 01 10:00:40 addons-086339 kubelet[1515]: E1101 10:00:40.403919    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991240403478698  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 10:00:40 addons-086339 kubelet[1515]: E1101 10:00:40.403948    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991240403478698  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 10:00:41 addons-086339 kubelet[1515]: W1101 10:00:41.484334    1515 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused"
	Nov 01 10:00:42 addons-086339 kubelet[1515]: E1101 10:00:42.028037    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\": ErrImagePull: reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/nginx" podUID="80f28ba1-b1ac-4f7a-9a35-3fd834d8e54e"
	Nov 01 10:00:47 addons-086339 kubelet[1515]: E1101 10:00:47.026330    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
	Nov 01 10:00:50 addons-086339 kubelet[1515]: E1101 10:00:50.407135    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991250406393908  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 10:00:50 addons-086339 kubelet[1515]: E1101 10:00:50.407379    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991250406393908  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 10:00:52 addons-086339 kubelet[1515]: E1101 10:00:52.540918    1515 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 01 10:00:52 addons-086339 kubelet[1515]: E1101 10:00:52.540992    1515 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 01 10:00:52 addons-086339 kubelet[1515]: E1101 10:00:52.541273    1515 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(bb9a245d-f766-4ca6-8de9-96b056a9cab4): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 10:00:52 addons-086339 kubelet[1515]: E1101 10:00:52.541325    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="bb9a245d-f766-4ca6-8de9-96b056a9cab4"
	Nov 01 10:00:59 addons-086339 kubelet[1515]: E1101 10:00:59.026609    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
	Nov 01 10:01:00 addons-086339 kubelet[1515]: E1101 10:01:00.412783    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991260412229847  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 10:01:00 addons-086339 kubelet[1515]: E1101 10:01:00.412909    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991260412229847  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 10:01:06 addons-086339 kubelet[1515]: E1101 10:01:06.031030    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="bb9a245d-f766-4ca6-8de9-96b056a9cab4"
	Nov 01 10:01:07 addons-086339 kubelet[1515]: I1101 10:01:07.026949    1515 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-lr4lw" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:01:07 addons-086339 kubelet[1515]: I1101 10:01:07.027118    1515 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 10:01:10 addons-086339 kubelet[1515]: E1101 10:01:10.416134    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991270415643523  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 10:01:10 addons-086339 kubelet[1515]: E1101 10:01:10.416433    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991270415643523  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	
	
	==> storage-provisioner [6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f] <==
	W1101 10:00:47.730533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:49.734700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:49.739899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:51.744908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:51.750173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:53.755077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:53.765866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:55.770215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:55.780157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:57.784138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:57.789302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:59.794506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:00:59.800166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:01.804246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:01.813039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:03.817324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:03.823977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:05.827893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:05.836170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:07.839065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:07.845342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:09.849236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:09.856680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:11.863958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:01:11.871501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-086339 -n addons-086339
helpers_test.go:269: (dbg) Run:  kubectl --context addons-086339 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn: exit status 1 (92.976172ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-086339/192.168.39.58
	Start Time:       Sat, 01 Nov 2025 09:53:11 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sggwf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sggwf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  8m2s                  default-scheduler  Successfully assigned default/nginx to addons-086339
	  Warning  Failed     5m6s                  kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     102s (x3 over 6m55s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     102s (x4 over 6m55s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    31s (x11 over 6m54s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     31s (x11 over 6m54s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    17s (x5 over 8m2s)    kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-086339/192.168.39.58
	Start Time:       Sat, 01 Nov 2025 09:53:15 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x27kl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-x27kl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  7m58s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-086339
	  Warning  Failed     2m45s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    119s (x4 over 7m57s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     71s (x3 over 6m23s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     71s (x4 over 6m23s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    14s (x9 over 6m23s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     14s (x9 over 6m23s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-086339/192.168.39.58
	Start Time:       Sat, 01 Nov 2025 09:52:55 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t5c9x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-t5c9x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m18s                  default-scheduler  Successfully assigned default/test-local-path to addons-086339
	  Warning  Failed     2m14s (x3 over 5m52s)  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    52s (x5 over 8m15s)    kubelet            Pulling image "busybox:stable"
	  Warning  Failed     21s (x2 over 7m29s)    kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     21s (x5 over 7m29s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    7s (x11 over 7m29s)    kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     7s (x11 over 7m29s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-d7qkm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dw6sn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 addons disable ingress-dns --alsologtostderr -v=1: (1.242394016s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 addons disable ingress --alsologtostderr -v=1: (7.787763763s)
--- FAIL: TestAddons/parallel/Ingress (491.94s)
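The repeated "toomanyrequests" errors in the kubelet log and in the pod events above point to Docker Hub's unauthenticated pull rate limit as the root cause of this failure: every image the test pods need (docker.io/nginx:alpine, docker.io/nginx, busybox:stable) is stuck in ErrImagePull/ImagePullBackOff. A minimal mitigation sketch, assuming the images are available in a local cache or that Docker Hub credentials exist; the regcred secret name and the placeholder credentials are illustrative only and are not part of this test run:

# Pre-load the images into the profile so kubelet never has to pull from Docker Hub
minikube -p addons-086339 image load docker.io/nginx:alpine
minikube -p addons-086339 image load docker.io/nginx
minikube -p addons-086339 image load busybox:stable

# Or authenticate pulls: create a docker-registry secret (placeholder credentials)
# and attach it to the default service account used by these pods
kubectl --context addons-086339 create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<DOCKERHUB_USER> --docker-password=<DOCKERHUB_TOKEN>
kubectl --context addons-086339 patch serviceaccount default \
  -p '{"imagePullSecrets":[{"name":"regcred"}]}'

Either approach only addresses the rate limiting seen in this run; it does not change the test logic itself.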

                                                
                                    
TestAddons/parallel/CSI (379.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1101 09:53:06.313191   73998 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 09:53:06.319699   73998 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 09:53:06.319733   73998 kapi.go:107] duration metric: took 6.565008ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.577154ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-086339 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-086339 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [eb0ec6cf-d05a-4514-92a8-21a6ef18f433] Pending
helpers_test.go:352: "task-pv-pod" [eb0ec6cf-d05a-4514-92a8-21a6ef18f433] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:337: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:567: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:567: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-086339 -n addons-086339
addons_test.go:567: TestAddons/parallel/CSI: showing logs for failed pods as of 2025-11-01 09:59:15.883751684 +0000 UTC m=+570.625959674
addons_test.go:567: (dbg) Run:  kubectl --context addons-086339 describe po task-pv-pod -n default
addons_test.go:567: (dbg) kubectl --context addons-086339 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-086339/192.168.39.58
Start Time:       Sat, 01 Nov 2025 09:53:15 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.30
IPs:
IP:  10.244.0.30
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP (http-server)
Host Port:      0/TCP (http-server)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x27kl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-x27kl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/task-pv-pod to addons-086339
Warning  Failed     2m36s (x2 over 4m25s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     47s (x3 over 4m25s)    kubelet            Error: ErrImagePull
Warning  Failed     47s                    kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    12s (x5 over 4m25s)    kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     12s (x5 over 4m25s)    kubelet            Error: ImagePullBackOff
Normal   Pulling    1s (x4 over 5m59s)     kubelet            Pulling image "docker.io/nginx"
addons_test.go:567: (dbg) Run:  kubectl --context addons-086339 logs task-pv-pod -n default
addons_test.go:567: (dbg) Non-zero exit: kubectl --context addons-086339 logs task-pv-pod -n default: exit status 1 (80.950776ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:567: kubectl --context addons-086339 logs task-pv-pod -n default: exit status 1
addons_test.go:568: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
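The task-pv-pod events above show the same docker.io rate-limit failure mode as the Ingress test (ErrImagePull on docker.io/nginx with "toomanyrequests"). An illustrative check outside the test harness, useful for confirming that the failures are pull-related rather than scheduling or storage issues:

kubectl --context addons-086339 get events -n default \
  --field-selector reason=Failed --sort-by=.lastTimestamp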
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/CSI]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-086339 -n addons-086339
helpers_test.go:252: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 logs -n 25: (1.355981847s)
helpers_test.go:260: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-319914                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-319914 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ start   │ -o=json --download-only -p download-only-036288 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-036288 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ delete  │ -p download-only-036288                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-036288 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ delete  │ -p download-only-319914                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-319914 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ delete  │ -p download-only-036288                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-036288 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ start   │ --download-only -p binary-mirror-623089 --alsologtostderr --binary-mirror http://127.0.0.1:33603 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-623089 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ delete  │ -p binary-mirror-623089                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-623089 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ addons  │ enable dashboard -p addons-086339                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ addons  │ disable dashboard -p addons-086339                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ start   │ -p addons-086339 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ enable headlamp -p addons-086339 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ ip      │ addons-086339 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-086339                                                                                                                                                                                                                                                                                                                                                                                         │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	│ addons  │ addons-086339 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:55 UTC │ 01 Nov 25 09:56 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:49:57
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:49:57.488461   74584 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:57.488721   74584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:57.488731   74584 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:57.488735   74584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:57.488932   74584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 09:49:57.489456   74584 out.go:368] Setting JSON to false
	I1101 09:49:57.490315   74584 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5545,"bootTime":1761985052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:49:57.490405   74584 start.go:143] virtualization: kvm guest
	I1101 09:49:57.492349   74584 out.go:179] * [addons-086339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:49:57.493732   74584 notify.go:221] Checking for updates...
	I1101 09:49:57.493769   74584 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 09:49:57.495124   74584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:57.496430   74584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 09:49:57.497763   74584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 09:49:57.499098   74584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:49:57.500291   74584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:49:57.501672   74584 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:49:57.530798   74584 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 09:49:57.531916   74584 start.go:309] selected driver: kvm2
	I1101 09:49:57.531929   74584 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:49:57.531940   74584 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:49:57.532704   74584 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:49:57.532950   74584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:49:57.532995   74584 cni.go:84] Creating CNI manager for ""
	I1101 09:49:57.533055   74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:49:57.533066   74584 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:49:57.533123   74584 start.go:353] cluster config:
	{Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1101 09:49:57.533236   74584 iso.go:125] acquiring lock: {Name:mk49d9a272bb99d336f82dfc5631a4c8ce9271c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:49:57.534643   74584 out.go:179] * Starting "addons-086339" primary control-plane node in "addons-086339" cluster
	I1101 09:49:57.535623   74584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:49:57.535667   74584 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:49:57.535680   74584 cache.go:59] Caching tarball of preloaded images
	I1101 09:49:57.535759   74584 preload.go:233] Found /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:49:57.535771   74584 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:49:57.536122   74584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json ...
	I1101 09:49:57.536151   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json: {Name:mka52b297897069cd677da03eb710fe0f89e4afc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:49:57.536283   74584 start.go:360] acquireMachinesLock for addons-086339: {Name:mk53a05d125fe91ead2a39c6bbf2ba926c471e2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 09:49:57.536359   74584 start.go:364] duration metric: took 60.989µs to acquireMachinesLock for "addons-086339"
	I1101 09:49:57.536383   74584 start.go:93] Provisioning new machine with config: &{Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:49:57.536443   74584 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 09:49:57.537962   74584 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1101 09:49:57.538116   74584 start.go:159] libmachine.API.Create for "addons-086339" (driver="kvm2")
	I1101 09:49:57.538147   74584 client.go:173] LocalClient.Create starting
	I1101 09:49:57.538241   74584 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem
	I1101 09:49:57.899320   74584 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem
	I1101 09:49:58.572079   74584 main.go:143] libmachine: creating domain...
	I1101 09:49:58.572106   74584 main.go:143] libmachine: creating network...
	I1101 09:49:58.573844   74584 main.go:143] libmachine: found existing default network
	I1101 09:49:58.574184   74584 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:49:58.574920   74584 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c7bfb0}
	I1101 09:49:58.575053   74584 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-086339</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:49:58.580872   74584 main.go:143] libmachine: creating private network mk-addons-086339 192.168.39.0/24...
	I1101 09:49:58.651337   74584 main.go:143] libmachine: private network mk-addons-086339 192.168.39.0/24 created
	I1101 09:49:58.651625   74584 main.go:143] libmachine: <network>
	  <name>mk-addons-086339</name>
	  <uuid>3e8e4cbf-1e3f-4b76-b08f-c763f9bae7dc</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:4f:55:bf'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
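A note on the step above: the kvm2 driver drives libvirt through its Go bindings rather than the virsh CLI. A minimal, self-contained sketch of defining and starting a network like mk-addons-086339 with the libvirt.org/go/libvirt package (an assumed import path; an illustration, not minikube's own code) could look like:

package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// connect to the same URI the driver uses (qemu:///system)
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// networkXML is the <network> definition dumped in the log above
	networkXML := `<network>
  <name>mk-addons-086339</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

	// define the persistent network, then bring it up; libvirt fills in the
	// bridge name, MAC and UUID reported back in the XML above
	network, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer network.Free()
	if err := network.Create(); err != nil {
		log.Fatal(err)
	}
}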
	
	I1101 09:49:58.651651   74584 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 ...
	I1101 09:49:58.651674   74584 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 09:49:58.651685   74584 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 09:49:58.651769   74584 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21830-70113/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 09:49:58.889523   74584 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa...
	I1101 09:49:59.320606   74584 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk...
	I1101 09:49:59.320670   74584 main.go:143] libmachine: Writing magic tar header
	I1101 09:49:59.320695   74584 main.go:143] libmachine: Writing SSH key tar header
	I1101 09:49:59.320769   74584 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 ...
	I1101 09:49:59.320832   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339
	I1101 09:49:59.320855   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 (perms=drwx------)
	I1101 09:49:59.320865   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines
	I1101 09:49:59.320880   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines (perms=drwxr-xr-x)
	I1101 09:49:59.320892   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 09:49:59.320902   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube (perms=drwxr-xr-x)
	I1101 09:49:59.320910   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113
	I1101 09:49:59.320919   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113 (perms=drwxrwxr-x)
	I1101 09:49:59.320926   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 09:49:59.320936   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 09:49:59.320946   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 09:49:59.320953   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 09:49:59.320964   74584 main.go:143] libmachine: checking permissions on dir: /home
	I1101 09:49:59.320971   74584 main.go:143] libmachine: skipping /home - not owner
	I1101 09:49:59.320977   74584 main.go:143] libmachine: defining domain...
	I1101 09:49:59.322386   74584 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-086339</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-086339'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:49:59.327390   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:41:14:53 in network default
	I1101 09:49:59.328042   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:49:59.328057   74584 main.go:143] libmachine: starting domain...
	I1101 09:49:59.328062   74584 main.go:143] libmachine: ensuring networks are active...
	I1101 09:49:59.328857   74584 main.go:143] libmachine: Ensuring network default is active
	I1101 09:49:59.329422   74584 main.go:143] libmachine: Ensuring network mk-addons-086339 is active
	I1101 09:49:59.330127   74584 main.go:143] libmachine: getting domain XML...
	I1101 09:49:59.331370   74584 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-086339</name>
	  <uuid>a0be334a-213a-4e9a-bad3-6168cb6c4d93</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b9:a4:85'/>
	      <source network='mk-addons-086339'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:41:14:53'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
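The first <domain> block above is the XML handed to libvirt for definition; the second is the expanded XML libvirt reports back after definition, with generated defaults (UUID, PCI addresses, controllers) filled in. Defining and starting the domain through the same Go bindings could be sketched as follows (fragment; assumes an open *libvirt.Connect and the libvirt.org/go/libvirt import):

// assumes: libvirt "libvirt.org/go/libvirt"
func defineAndStart(conn *libvirt.Connect, domainXML string) (string, error) {
	// domainXML is the <domain type='kvm'> definition logged above
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return "", err
	}
	defer dom.Free()

	// GetXMLDesc on the defined domain yields the expanded XML with libvirt's
	// generated defaults, which is what the second dump above shows
	liveXML, err := dom.GetXMLDesc(0)
	if err != nil {
		return "", err
	}

	// Create() starts the persistent domain ("domain is now running" below)
	if err := dom.Create(); err != nil {
		return "", err
	}
	return liveXML, nil
}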
	
	I1101 09:50:00.609088   74584 main.go:143] libmachine: waiting for domain to start...
	I1101 09:50:00.610434   74584 main.go:143] libmachine: domain is now running
	I1101 09:50:00.610456   74584 main.go:143] libmachine: waiting for IP...
	I1101 09:50:00.611312   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:00.612106   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:00.612125   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:00.612466   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:00.612543   74584 retry.go:31] will retry after 238.184391ms: waiting for domain to come up
	I1101 09:50:00.851957   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:00.852980   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:00.852999   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:00.853378   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:00.853417   74584 retry.go:31] will retry after 315.459021ms: waiting for domain to come up
	I1101 09:50:01.170821   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:01.171618   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:01.171637   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:01.172000   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:01.172045   74584 retry.go:31] will retry after 375.800667ms: waiting for domain to come up
	I1101 09:50:01.549768   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:01.550551   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:01.550568   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:01.550912   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:01.550947   74584 retry.go:31] will retry after 436.650242ms: waiting for domain to come up
	I1101 09:50:01.989558   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:01.990329   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:01.990346   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:01.990674   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:01.990717   74584 retry.go:31] will retry after 579.834412ms: waiting for domain to come up
	I1101 09:50:02.572692   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:02.573467   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:02.573488   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:02.573815   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:02.573865   74584 retry.go:31] will retry after 839.063755ms: waiting for domain to come up
	I1101 09:50:03.414428   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:03.415319   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:03.415342   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:03.415659   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:03.415702   74584 retry.go:31] will retry after 768.970672ms: waiting for domain to come up
	I1101 09:50:04.186700   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:04.187419   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:04.187437   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:04.187709   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:04.187746   74584 retry.go:31] will retry after 1.192575866s: waiting for domain to come up
	I1101 09:50:05.382202   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:05.382884   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:05.382907   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:05.383270   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:05.383321   74584 retry.go:31] will retry after 1.520355221s: waiting for domain to come up
	I1101 09:50:06.906019   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:06.906685   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:06.906702   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:06.906966   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:06.907000   74584 retry.go:31] will retry after 1.452783326s: waiting for domain to come up
	I1101 09:50:08.361823   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:08.362686   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:08.362711   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:08.363062   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:08.363109   74584 retry.go:31] will retry after 1.991395227s: waiting for domain to come up
	I1101 09:50:10.357523   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:10.358353   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:10.358372   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:10.358693   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:10.358739   74584 retry.go:31] will retry after 3.532288823s: waiting for domain to come up
	I1101 09:50:13.893052   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:13.893671   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:13.893684   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:13.893975   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:13.894012   74584 retry.go:31] will retry after 4.252229089s: waiting for domain to come up
	I1101 09:50:18.147616   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.148327   74584 main.go:143] libmachine: domain addons-086339 has current primary IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.148350   74584 main.go:143] libmachine: found domain IP: 192.168.39.58
	I1101 09:50:18.148365   74584 main.go:143] libmachine: reserving static IP address...
	I1101 09:50:18.148791   74584 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-086339", mac: "52:54:00:b9:a4:85", ip: "192.168.39.58"} in network mk-addons-086339
	I1101 09:50:18.327560   74584 main.go:143] libmachine: reserved static IP address 192.168.39.58 for domain addons-086339
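The wait-for-IP loop above polls the private network's DHCP leases for the domain's MAC address (falling back to a source=arp listing when no lease is visible yet) with a growing backoff. A rough sketch of the lease-based check, under the same binding assumption:

// assumes: "fmt", "strings", "time" and libvirt "libvirt.org/go/libvirt"
func waitForIP(conn *libvirt.Connect, networkName, mac string, timeout time.Duration) (string, error) {
	network, err := conn.LookupNetworkByName(networkName)
	if err != nil {
		return "", err
	}
	defer network.Free()

	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		leases, err := network.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, lease := range leases {
			// the run above matches on MAC 52:54:00:b9:a4:85 in mk-addons-086339
			if strings.EqualFold(lease.Mac, mac) && lease.IPaddr != "" {
				return lease.IPaddr, nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the real loop retries with increasing delays
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, networkName)
}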
	I1101 09:50:18.327599   74584 main.go:143] libmachine: waiting for SSH...
	I1101 09:50:18.327609   74584 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 09:50:18.330699   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.331371   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.331408   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.331641   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.331928   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.331942   74584 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 09:50:18.444329   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 
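The SSH wait above keeps running `exit 0` against port 22 of the reserved address with the generated machines/addons-086339/id_rsa key until a session succeeds. One such probe, sketched with golang.org/x/crypto/ssh (assumed package; the run itself uses the libmachine native SSH client shown in the log):

// assumes: "os", "time" and "golang.org/x/crypto/ssh"
func probeSSH(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user, // "docker", per the sshutil line later in the log
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no pinned host key
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg) // e.g. "192.168.39.58:22"
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	return session.Run("exit 0") // the same no-op command the log runs
}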
	I1101 09:50:18.444817   74584 main.go:143] libmachine: domain creation complete
	I1101 09:50:18.446547   74584 machine.go:94] provisionDockerMachine start ...
	I1101 09:50:18.449158   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.449586   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.449617   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.449805   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.450004   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.450014   74584 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:50:18.560574   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 09:50:18.560609   74584 buildroot.go:166] provisioning hostname "addons-086339"
	I1101 09:50:18.564015   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.564582   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.564616   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.564819   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.565060   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.565073   74584 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-086339 && echo "addons-086339" | sudo tee /etc/hostname
	I1101 09:50:18.692294   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-086339
	
	I1101 09:50:18.695361   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.695730   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.695754   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.695958   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.696217   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.696238   74584 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-086339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-086339/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-086339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:50:18.817833   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:50:18.817861   74584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 09:50:18.817917   74584 buildroot.go:174] setting up certificates
	I1101 09:50:18.817929   74584 provision.go:84] configureAuth start
	I1101 09:50:18.820836   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.821182   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.821205   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.823468   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.823880   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.823917   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.824065   74584 provision.go:143] copyHostCerts
	I1101 09:50:18.824126   74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 09:50:18.824236   74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 09:50:18.824293   74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 09:50:18.824393   74584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.addons-086339 san=[127.0.0.1 192.168.39.58 addons-086339 localhost minikube]
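The server certificate is issued against the CA created at the start of the run, with the SANs listed on this line. A standalone sketch with crypto/x509 (the file names and key size are assumptions; the 26280h expiry comes from CertExpiration in the cluster config above; this is not minikube's own cert helper):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// load the CA created earlier in the log (assumed local copies of ca.pem / ca-key.pem)
	caPair, err := tls.LoadX509KeyPair("ca.pem", "ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caPair.Certificate[0])
	if err != nil {
		log.Fatal(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-086339"}},
		// SANs from the provision.go line above
		DNSNames:    []string{"addons-086339", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.58")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &serverKey.PublicKey, caPair.PrivateKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}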
	I1101 09:50:18.982158   74584 provision.go:177] copyRemoteCerts
	I1101 09:50:18.982222   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:50:18.984649   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.985018   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.985044   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.985191   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.074666   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:50:19.105450   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:50:19.136079   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:50:19.165744   74584 provision.go:87] duration metric: took 347.798818ms to configureAuth
	I1101 09:50:19.165785   74584 buildroot.go:189] setting minikube options for container-runtime
	I1101 09:50:19.165985   74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:19.168523   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.169168   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.169200   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.169383   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:19.169583   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:19.169597   74584 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:50:19.428804   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:50:19.428828   74584 machine.go:97] duration metric: took 982.268013ms to provisionDockerMachine
	I1101 09:50:19.428839   74584 client.go:176] duration metric: took 21.890685225s to LocalClient.Create
	I1101 09:50:19.428858   74584 start.go:167] duration metric: took 21.89074228s to libmachine.API.Create "addons-086339"
	I1101 09:50:19.428865   74584 start.go:293] postStartSetup for "addons-086339" (driver="kvm2")
	I1101 09:50:19.428874   74584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:50:19.428936   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:50:19.431801   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.432251   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.432273   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.432405   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.520001   74584 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:50:19.525231   74584 info.go:137] Remote host: Buildroot 2025.02
	I1101 09:50:19.525259   74584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 09:50:19.525321   74584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 09:50:19.525345   74584 start.go:296] duration metric: took 96.474195ms for postStartSetup
	I1101 09:50:19.528299   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.528696   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.528717   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.528916   74584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json ...
	I1101 09:50:19.529095   74584 start.go:128] duration metric: took 21.992639315s to createHost
	I1101 09:50:19.531331   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.531699   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.531722   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.531876   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:19.532065   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:19.532075   74584 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 09:50:19.643235   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761990619.607534656
	
	I1101 09:50:19.643257   74584 fix.go:216] guest clock: 1761990619.607534656
	I1101 09:50:19.643268   74584 fix.go:229] Guest: 2025-11-01 09:50:19.607534656 +0000 UTC Remote: 2025-11-01 09:50:19.52910603 +0000 UTC m=+22.094671738 (delta=78.428626ms)
	I1101 09:50:19.643283   74584 fix.go:200] guest clock delta is within tolerance: 78.428626ms
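The guest-clock check parses the output of `date +%s.%N` run over SSH and compares it with the host's wall clock; the 78.4ms delta above is then judged against a tolerance. A small sketch of that comparison (fragment; the tolerance shown in the usage comment is an illustration, not minikube's actual threshold):

// assumes: "strconv", "strings", "time"
func guestClockDelta(guestOut string) (time.Duration, error) {
	// guestOut is the raw `date +%s.%N` output, e.g. "1761990619.607534656"
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

// a caller would then compare the result against a tolerance, e.g.
//   if delta > 2*time.Second { /* resync or warn */ }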
	I1101 09:50:19.643288   74584 start.go:83] releasing machines lock for "addons-086339", held for 22.106918768s
	I1101 09:50:19.646471   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.646896   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.646926   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.647587   74584 ssh_runner.go:195] Run: cat /version.json
	I1101 09:50:19.647618   74584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:50:19.650456   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.650903   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.650929   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.650937   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.651111   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.651498   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.651548   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.651722   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.732914   74584 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:19.761438   74584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:50:19.921978   74584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:50:19.929230   74584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:50:19.929321   74584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:50:19.949743   74584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:50:19.949779   74584 start.go:496] detecting cgroup driver to use...
	I1101 09:50:19.949851   74584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:50:19.969767   74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:50:19.988383   74584 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:50:19.988445   74584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:50:20.006528   74584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:50:20.025137   74584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:50:20.177314   74584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:50:20.388642   74584 docker.go:234] disabling docker service ...
	I1101 09:50:20.388724   74584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:50:20.405986   74584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:50:20.421236   74584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:50:20.585305   74584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:50:20.731424   74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:50:20.748134   74584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:50:20.778555   74584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:50:20.778621   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.792483   74584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:50:20.792563   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.806228   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.819314   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.832971   74584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:50:20.847580   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.861416   74584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.884021   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.898082   74584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:50:20.909995   74584 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 09:50:20.910054   74584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 09:50:20.932503   74584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:50:20.945456   74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:50:21.091518   74584 ssh_runner.go:195] Run: sudo systemctl restart crio
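Editor's note: the sed/systemctl sequence above rewrites the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs driver, conmon cgroup, unprivileged low ports) before restarting crio. A minimal Go sketch of the same substitutions, applied to an in-memory sample instead of over SSH; the sample input and the append-at-end placement of default_sysctls are assumptions, not the actual file shipped in the ISO:

```go
// crio_conf_sketch.go - mirrors the sed edits from the log on an assumed
// sample of /etc/crio/crio.conf.d/02-crio.conf (simplified sketch).
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// pause_image -> registry.k8s.io/pause:3.10.1
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	// drop any existing conmon_cgroup line, then force cgroupfs + conmon_cgroup "pod"
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	// allow unprivileged low ports inside pods (appended at the end in this sketch)
	conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"

	fmt.Print(conf)
}
```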
	I1101 09:50:21.209311   74584 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:50:21.209394   74584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:50:21.215638   74584 start.go:564] Will wait 60s for crictl version
	I1101 09:50:21.215718   74584 ssh_runner.go:195] Run: which crictl
	I1101 09:50:21.220104   74584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 09:50:21.265319   74584 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 09:50:21.265428   74584 ssh_runner.go:195] Run: crio --version
	I1101 09:50:21.296407   74584 ssh_runner.go:195] Run: crio --version
	I1101 09:50:21.330270   74584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 09:50:21.333966   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:21.334360   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:21.334382   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:21.334577   74584 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 09:50:21.339385   74584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
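Editor's note: the host.minikube.internal entry is refreshed with a grep -v / append pipeline against /etc/hosts. A rough stdlib Go equivalent of that hosts edit (values taken from this run; this sketch writes to a scratch file rather than /etc/hosts):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry drops any existing line ending in "\t<name>" and appends a
// fresh "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline in the log.
func ensureHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	out := ensureHostEntry(in, "192.168.39.1", "host.minikube.internal")
	// Written to a scratch file rather than /etc/hosts in this sketch.
	_ = os.WriteFile("/tmp/hosts.sketch", []byte(out), 0644)
	fmt.Print(out)
}
```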
	I1101 09:50:21.355743   74584 kubeadm.go:884] updating cluster {Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:50:21.355864   74584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:50:21.355925   74584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:50:21.393026   74584 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 09:50:21.393097   74584 ssh_runner.go:195] Run: which lz4
	I1101 09:50:21.397900   74584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 09:50:21.403032   74584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 09:50:21.403064   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 09:50:22.958959   74584 crio.go:462] duration metric: took 1.561103562s to copy over tarball
	I1101 09:50:22.959030   74584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 09:50:24.646069   74584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.687012473s)
	I1101 09:50:24.646110   74584 crio.go:469] duration metric: took 1.687120275s to extract the tarball
	I1101 09:50:24.646124   74584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 09:50:24.689384   74584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:50:24.745551   74584 crio.go:514] all images are preloaded for cri-o runtime.
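Editor's note: the preload check above boils down to parsing `sudo crictl images --output json` and looking for the expected image tags (here registry.k8s.io/kube-apiserver:v1.34.1). A sketch of that check, assuming the usual crictl JSON shape (an `images` array carrying `repoTags`); the sample JSON is made up:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages models just the fields this sketch needs from `crictl images -o json`.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the wanted tag appears in the crictl JSON output.
func hasImage(raw []byte, want string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(raw, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Stand-in for the output of `sudo crictl images --output json`.
	raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"]}]}`)
	ok, err := hasImage(raw, "registry.k8s.io/kube-apiserver:v1.34.1")
	fmt.Println(ok, err)
}
```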
	I1101 09:50:24.745581   74584 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:50:24.745590   74584 kubeadm.go:935] updating node { 192.168.39.58 8443 v1.34.1 crio true true} ...
	I1101 09:50:24.745676   74584 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-086339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:50:24.745742   74584 ssh_runner.go:195] Run: crio config
	I1101 09:50:24.792600   74584 cni.go:84] Creating CNI manager for ""
	I1101 09:50:24.792624   74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:50:24.792643   74584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:50:24.792678   74584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-086339 NodeName:addons-086339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:50:24.792797   74584 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-086339"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:50:24.792863   74584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:50:24.805312   74584 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:50:24.805386   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:50:24.817318   74584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1101 09:50:24.839738   74584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:50:24.861206   74584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
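Editor's note: the kubeadm.yaml shown above is rendered from the kubeadm options struct and then copied to /var/tmp/minikube/kubeadm.yaml.new. A toy version of that templating step (a sketch, not minikube's actual template; only a few of the fields from the log are carried over):

```go
package main

import (
	"os"
	"text/template"
)

// params holds the handful of values this sketch substitutes into the config.
type params struct {
	AdvertiseAddress  string
	BindPort          int
	ClusterName       string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := params{
		AdvertiseAddress:  "192.168.39.58",
		BindPort:          8443,
		ClusterName:       "mk",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.34.1",
	}
	// Render the fragment to stdout; minikube writes the full file to
	// /var/tmp/minikube/kubeadm.yaml.new before copying it into place.
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```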
	I1101 09:50:24.882598   74584 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1101 09:50:24.887202   74584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:50:24.903393   74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:50:25.046563   74584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:50:25.078339   74584 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339 for IP: 192.168.39.58
	I1101 09:50:25.078373   74584 certs.go:195] generating shared ca certs ...
	I1101 09:50:25.078393   74584 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.078607   74584 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 09:50:25.370750   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt ...
	I1101 09:50:25.370787   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt: {Name:mk44e2ef3879300ef465f5e14a88e17a335203c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.370979   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key ...
	I1101 09:50:25.370991   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key: {Name:mk6a6a936cb10734e248a5e184dc212d0dd50fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.371084   74584 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 09:50:25.596029   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt ...
	I1101 09:50:25.596060   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt: {Name:mk4883ce1337edc02ddc3ac7b72fc885fc718a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.596251   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key ...
	I1101 09:50:25.596263   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key: {Name:mk64aaf400461d117ff2d2f246459980ad32acba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.596345   74584 certs.go:257] generating profile certs ...
	I1101 09:50:25.596402   74584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key
	I1101 09:50:25.596427   74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt with IP's: []
	I1101 09:50:25.837595   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt ...
	I1101 09:50:25.837629   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: {Name:mk6a3c2908e98c5011b9a353eff3f73fbb200e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.837800   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key ...
	I1101 09:50:25.837814   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key: {Name:mke495d2d15563b5194e6cade83d0c75b9212db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.837890   74584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c
	I1101 09:50:25.837920   74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.58]
	I1101 09:50:25.933112   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c ...
	I1101 09:50:25.933142   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c: {Name:mk0254e8775842aca5cd671155531f1ec86ec40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.933311   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c ...
	I1101 09:50:25.933328   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c: {Name:mk3e1746ccfcc3989b4b0944f75fafe8929108a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.933413   74584 certs.go:382] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt
	I1101 09:50:25.933491   74584 certs.go:386] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key
	I1101 09:50:25.933552   74584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key
	I1101 09:50:25.933569   74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt with IP's: []
	I1101 09:50:26.270478   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt ...
	I1101 09:50:26.270513   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt: {Name:mk40ee0c5f510c6df044b64c5c0ccf02f754f518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:26.270707   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key ...
	I1101 09:50:26.270719   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key: {Name:mk13d4f8cab34676a9c94f4e51f06fa6b4450e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:26.270893   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:50:26.270934   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:50:26.270958   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:50:26.270980   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
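Editor's note: the certs.go/crypto.go steps above create a CA and then leaf certificates signed for the listed SANs (10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.58). A condensed stdlib sketch of that flow; key sizes, validity period, and subjects are assumptions, and error handling is elided for brevity:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// CA key + self-signed CA certificate (analogue of ca.key / ca.crt).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API-server leaf certificate with the IP SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.58"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Printf("CA cert %d bytes, apiserver cert %d bytes\n", len(caDER), len(leafDER))
}
```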
	I1101 09:50:26.271524   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:50:26.304432   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:50:26.336585   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:50:26.370965   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:50:26.404637   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:50:26.438434   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:50:26.470419   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:50:26.505400   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:50:26.538739   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:50:26.571139   74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:50:26.596933   74584 ssh_runner.go:195] Run: openssl version
	I1101 09:50:26.604814   74584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:50:26.625168   74584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:50:26.631403   74584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:50:26.631463   74584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:50:26.639666   74584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:50:26.655106   74584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:50:26.660616   74584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:50:26.660681   74584 kubeadm.go:401] StartCluster: {Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:50:26.660767   74584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:26.660830   74584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:26.713279   74584 cri.go:89] found id: ""
	I1101 09:50:26.713354   74584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:50:26.732360   74584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:50:26.753939   74584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:50:26.768399   74584 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:50:26.768428   74584 kubeadm.go:158] found existing configuration files:
	
	I1101 09:50:26.768509   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:50:26.780652   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:50:26.780726   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:50:26.792996   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:50:26.805190   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:50:26.805252   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:50:26.817970   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:50:26.829425   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:50:26.829521   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:50:26.842392   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:50:26.855031   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:50:26.855120   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
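Editor's note: the block above is the stale-config check. Each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed before kubeadm init regenerates it. A compact sketch of that loop (file names and endpoint copied from the log; run against a scratch directory, not /etc/kubernetes):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	dir := "/tmp/kubernetes-sketch" // stand-in for /etc/kubernetes in this sketch
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		switch {
		case os.IsNotExist(err):
			fmt.Printf("%s: not present, nothing to clean\n", name)
		case err == nil && bytes.Contains(data, endpoint):
			fmt.Printf("%s: already points at the control plane, keeping\n", name)
		default:
			// Wrong or unreadable config: remove it so kubeadm regenerates it.
			_ = os.Remove(path)
			fmt.Printf("%s: removed stale config\n", name)
		}
	}
}
```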
	I1101 09:50:26.868465   74584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 09:50:27.034423   74584 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:50:40.596085   74584 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:50:40.596157   74584 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:50:40.596234   74584 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:50:40.596323   74584 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:50:40.596395   74584 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:50:40.596501   74584 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:50:40.598485   74584 out.go:252]   - Generating certificates and keys ...
	I1101 09:50:40.598596   74584 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:50:40.598677   74584 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:50:40.598786   74584 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:50:40.598884   74584 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:50:40.598965   74584 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:50:40.599020   74584 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:50:40.599097   74584 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:50:40.599235   74584 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-086339 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I1101 09:50:40.599294   74584 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:50:40.599486   74584 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-086339 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I1101 09:50:40.599578   74584 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:50:40.599671   74584 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:50:40.599744   74584 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:50:40.599837   74584 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:50:40.599908   74584 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:50:40.599990   74584 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:50:40.600070   74584 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:50:40.600159   74584 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:50:40.600236   74584 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:50:40.600342   74584 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:50:40.600430   74584 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:50:40.601841   74584 out.go:252]   - Booting up control plane ...
	I1101 09:50:40.601953   74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:50:40.602064   74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:50:40.602160   74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:50:40.602298   74584 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:50:40.602458   74584 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:50:40.602614   74584 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:50:40.602706   74584 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:50:40.602764   74584 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:50:40.602925   74584 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:50:40.603084   74584 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:50:40.603174   74584 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002004831s
	I1101 09:50:40.603300   74584 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:50:40.603404   74584 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.58:8443/livez
	I1101 09:50:40.603516   74584 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:50:40.603630   74584 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:50:40.603719   74584 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.147708519s
	I1101 09:50:40.603845   74584 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.505964182s
	I1101 09:50:40.603957   74584 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503174092s
	I1101 09:50:40.604099   74584 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:50:40.604336   74584 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:50:40.604410   74584 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:50:40.604590   74584 kubeadm.go:319] [mark-control-plane] Marking the node addons-086339 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:50:40.604649   74584 kubeadm.go:319] [bootstrap-token] Using token: n6ooj1.g2r52lt9s64k7lzx
	I1101 09:50:40.606300   74584 out.go:252]   - Configuring RBAC rules ...
	I1101 09:50:40.606413   74584 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:50:40.606488   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:50:40.606682   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:50:40.606839   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:50:40.607006   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:50:40.607114   74584 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:50:40.607229   74584 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:50:40.607269   74584 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:50:40.607307   74584 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:50:40.607312   74584 kubeadm.go:319] 
	I1101 09:50:40.607359   74584 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:50:40.607364   74584 kubeadm.go:319] 
	I1101 09:50:40.607423   74584 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:50:40.607428   74584 kubeadm.go:319] 
	I1101 09:50:40.607448   74584 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:50:40.607512   74584 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:50:40.607591   74584 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:50:40.607600   74584 kubeadm.go:319] 
	I1101 09:50:40.607669   74584 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:50:40.607677   74584 kubeadm.go:319] 
	I1101 09:50:40.607717   74584 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:50:40.607722   74584 kubeadm.go:319] 
	I1101 09:50:40.607785   74584 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:50:40.607880   74584 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:50:40.607975   74584 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:50:40.607984   74584 kubeadm.go:319] 
	I1101 09:50:40.608100   74584 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:50:40.608199   74584 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:50:40.608211   74584 kubeadm.go:319] 
	I1101 09:50:40.608275   74584 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token n6ooj1.g2r52lt9s64k7lzx \
	I1101 09:50:40.608412   74584 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a \
	I1101 09:50:40.608438   74584 kubeadm.go:319] 	--control-plane 
	I1101 09:50:40.608444   74584 kubeadm.go:319] 
	I1101 09:50:40.608584   74584 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:50:40.608595   74584 kubeadm.go:319] 
	I1101 09:50:40.608701   74584 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n6ooj1.g2r52lt9s64k7lzx \
	I1101 09:50:40.608845   74584 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a 
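Editor's note: the [control-plane-check] lines above poll the component health endpoints until each reports healthy (kube-controller-manager after ~3.1s, kube-scheduler after ~4.5s, kube-apiserver after ~6.5s). A rough sketch of such a probe loop; the URLs are the ones printed in the log, while InsecureSkipVerify and the intervals/timeouts are assumptions for the sketch, not what kubeadm actually uses:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline passes.
func waitHealthy(name, url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The control-plane components serve self-signed certs during bootstrap.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s is healthy\n", name)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", name, timeout)
}

func main() {
	checks := map[string]string{
		"kube-apiserver":          "https://192.168.39.58:8443/livez",
		"kube-controller-manager": "https://127.0.0.1:10257/healthz",
		"kube-scheduler":          "https://127.0.0.1:10259/livez",
	}
	for name, url := range checks {
		if err := waitHealthy(name, url, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
}
```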
	I1101 09:50:40.608868   74584 cni.go:84] Creating CNI manager for ""
	I1101 09:50:40.608880   74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:50:40.610610   74584 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 09:50:40.612071   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 09:50:40.627372   74584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
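Editor's note: the 496-byte /etc/cni/net.d/1-k8s.conflist written here is the bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen above. A sketch of what such a conflist might contain, built by marshalling a struct; the plugin fields are plausible bridge/host-local defaults, not the exact file minikube ships:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed shape of a minimal bridge CNI conflist for the 10.244.0.0/16 pod CIDR.
	conflist := map[string]any{
		"cniVersion": "1.0.0",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":        "bridge",
				"bridge":      "bridge",
				"isGateway":   true,
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
```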
	I1101 09:50:40.653117   74584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:50:40.653226   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-086339 minikube.k8s.io/updated_at=2025_11_01T09_50_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=addons-086339 minikube.k8s.io/primary=true
	I1101 09:50:40.653234   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:40.841062   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:40.841065   74584 ops.go:34] apiserver oom_adj: -16
	I1101 09:50:41.341444   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:41.841738   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:42.341137   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:42.841859   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:43.341430   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:43.842032   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:44.341776   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:44.842146   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:45.342151   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:45.471694   74584 kubeadm.go:1114] duration metric: took 4.818566134s to wait for elevateKubeSystemPrivileges
	I1101 09:50:45.471741   74584 kubeadm.go:403] duration metric: took 18.811065248s to StartCluster
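Editor's note: the repeated `kubectl get sa default` runs above are the elevateKubeSystemPrivileges wait, retried roughly every 500 ms until the default service account exists (about 4.8s in this run). A generic retry-until-ready sketch along those lines; the command and interval come from the log, while the waitFor helper itself is hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitFor runs cmd repeatedly until it exits 0 or the timeout expires.
func waitFor(timeout, interval time.Duration, name string, args ...string) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command(name, args...).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("%s did not succeed within %s", name, timeout)
}

func main() {
	// Same shape as the loop in the log: poll until the default service account exists.
	err := waitFor(2*time.Minute, 500*time.Millisecond,
		"kubectl", "get", "sa", "default", "--namespace", "default")
	fmt.Println("default service account ready:", err == nil)
}
```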
	I1101 09:50:45.471765   74584 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:45.471940   74584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 09:50:45.472382   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:45.472671   74584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:50:45.472717   74584 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:50:45.472765   74584 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
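Editor's note: addons.go walks the toEnable map above and, for each addon set to true, kicks off its own "Setting addon ... in addons-086339" path; the interleaved timestamps below show those enables running concurrently. A minimal sketch of that fan-out; the addon names are taken from the map in the log, and the enable step is a placeholder, not minikube's real implementation:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	toEnable := map[string]bool{
		"ingress":             true,
		"ingress-dns":         true,
		"metrics-server":      true,
		"registry":            true,
		"storage-provisioner": true,
		"volcano":             true, // later rejected: "volcano addon does not support crio"
		"dashboard":           false,
	}

	var wg sync.WaitGroup
	for name, enabled := range toEnable {
		if !enabled {
			continue
		}
		wg.Add(1)
		go func(addon string) {
			defer wg.Done()
			// Placeholder for the real enable step (scp manifest + kubectl apply).
			fmt.Printf("Setting addon %s=true in \"addons-086339\"\n", addon)
		}(name)
	}
	wg.Wait()
}
```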
	I1101 09:50:45.472916   74584 addons.go:70] Setting yakd=true in profile "addons-086339"
	I1101 09:50:45.472917   74584 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-086339"
	I1101 09:50:45.472959   74584 addons.go:239] Setting addon yakd=true in "addons-086339"
	I1101 09:50:45.472963   74584 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-086339"
	I1101 09:50:45.472976   74584 addons.go:70] Setting registry=true in profile "addons-086339"
	I1101 09:50:45.472991   74584 addons.go:239] Setting addon registry=true in "addons-086339"
	I1101 09:50:45.473004   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473010   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473012   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473003   74584 addons.go:70] Setting metrics-server=true in profile "addons-086339"
	I1101 09:50:45.473051   74584 addons.go:70] Setting registry-creds=true in profile "addons-086339"
	I1101 09:50:45.473068   74584 addons.go:239] Setting addon metrics-server=true in "addons-086339"
	I1101 09:50:45.473084   74584 addons.go:239] Setting addon registry-creds=true in "addons-086339"
	I1101 09:50:45.473121   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473144   74584 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-086339"
	I1101 09:50:45.473150   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473175   74584 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-086339"
	I1101 09:50:45.473203   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473564   74584 addons.go:70] Setting volcano=true in profile "addons-086339"
	I1101 09:50:45.473589   74584 addons.go:239] Setting addon volcano=true in "addons-086339"
	I1101 09:50:45.473622   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473737   74584 addons.go:70] Setting gcp-auth=true in profile "addons-086339"
	I1101 09:50:45.473786   74584 mustload.go:66] Loading cluster: addons-086339
	I1101 09:50:45.474010   74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:45.474219   74584 addons.go:70] Setting ingress-dns=true in profile "addons-086339"
	I1101 09:50:45.474254   74584 addons.go:239] Setting addon ingress-dns=true in "addons-086339"
	I1101 09:50:45.474313   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.472963   74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:45.473011   74584 addons.go:70] Setting storage-provisioner=true in profile "addons-086339"
	I1101 09:50:45.474667   74584 addons.go:239] Setting addon storage-provisioner=true in "addons-086339"
	I1101 09:50:45.474685   74584 addons.go:70] Setting cloud-spanner=true in profile "addons-086339"
	I1101 09:50:45.474699   74584 addons.go:239] Setting addon cloud-spanner=true in "addons-086339"
	I1101 09:50:45.474703   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474721   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474993   74584 addons.go:70] Setting volumesnapshots=true in profile "addons-086339"
	I1101 09:50:45.475011   74584 addons.go:239] Setting addon volumesnapshots=true in "addons-086339"
	I1101 09:50:45.475031   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.475344   74584 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-086339"
	I1101 09:50:45.475368   74584 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-086339"
	I1101 09:50:45.475372   74584 addons.go:70] Setting default-storageclass=true in profile "addons-086339"
	I1101 09:50:45.475392   74584 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-086339"
	I1101 09:50:45.475482   74584 addons.go:70] Setting ingress=true in profile "addons-086339"
	I1101 09:50:45.475497   74584 addons.go:239] Setting addon ingress=true in "addons-086339"
	I1101 09:50:45.475549   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474669   74584 addons.go:70] Setting inspektor-gadget=true in profile "addons-086339"
	I1101 09:50:45.475789   74584 addons.go:239] Setting addon inspektor-gadget=true in "addons-086339"
	I1101 09:50:45.475796   74584 out.go:179] * Verifying Kubernetes components...
	I1101 09:50:45.475819   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474680   74584 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-086339"
	I1101 09:50:45.476065   74584 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-086339"
	I1101 09:50:45.476115   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.477255   74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:50:45.480031   74584 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:50:45.480031   74584 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:50:45.480033   74584 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	W1101 09:50:45.481113   74584 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 09:50:45.481446   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.484726   74584 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:50:45.484753   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:50:45.484938   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:50:45.484960   74584 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:50:45.484966   74584 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:50:45.484973   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:50:45.485125   74584 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:50:45.485153   74584 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:50:45.485273   74584 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-086339"
	I1101 09:50:45.485691   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.485920   74584 addons.go:239] Setting addon default-storageclass=true in "addons-086339"
	I1101 09:50:45.485962   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.487450   74584 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:50:45.487459   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:50:45.487484   74584 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:50:45.487497   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:50:45.487517   74584 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:50:45.487560   74584 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:50:45.487563   74584 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 09:50:45.488316   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:50:45.488329   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:50:45.488348   74584 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:50:45.489625   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:50:45.489651   74584 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:50:45.489699   74584 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:50:45.489902   74584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:50:45.490208   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:50:45.490224   74584 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:50:45.490262   74584 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:50:45.490750   74584 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:50:45.491163   74584 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:50:45.491557   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:50:45.491173   74584 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 09:50:45.491207   74584 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:50:45.491713   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:50:45.491208   74584 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:50:45.491791   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:50:45.491917   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:50:45.492081   74584 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:50:45.492774   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:50:45.493050   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.493676   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.494048   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.494216   74584 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:50:45.494271   74584 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:50:45.494283   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:50:45.494189   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.494412   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.495222   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.495346   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.495450   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.495550   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:50:45.495608   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:50:45.495670   74584 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:50:45.495688   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:50:45.495797   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.495840   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.496406   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.496819   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.497603   74584 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:50:45.497622   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:50:45.498607   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:50:45.500140   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:50:45.500156   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.500745   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.500905   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.501448   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.501490   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.501945   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502137   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502129   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.502357   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.502386   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502479   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502618   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.502659   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:50:45.502626   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.502671   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502621   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503336   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.503381   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503456   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.503481   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503494   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503740   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.503831   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.503858   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503858   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.503886   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.504294   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.504670   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.504706   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.504708   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.504783   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.504812   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.504989   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.505241   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.505275   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:50:45.505416   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.505439   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.505646   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.505919   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.506301   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.506330   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.506479   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.506657   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.507207   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.507243   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.507456   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.507843   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:50:45.509235   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:50:45.509251   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:50:45.511923   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.512313   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.512339   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.512478   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	W1101 09:50:45.863592   74584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56024->192.168.39.58:22: read: connection reset by peer
	I1101 09:50:45.863626   74584 retry.go:31] will retry after 353.468022ms: ssh: handshake failed: read tcp 192.168.39.1:56024->192.168.39.58:22: read: connection reset by peer
	W1101 09:50:45.863706   74584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56030->192.168.39.58:22: read: connection reset by peer
	I1101 09:50:45.863718   74584 retry.go:31] will retry after 366.435822ms: ssh: handshake failed: read tcp 192.168.39.1:56030->192.168.39.58:22: read: connection reset by peer
	I1101 09:50:46.204700   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:50:46.344397   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:50:46.364416   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:50:46.364443   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:50:46.382914   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:50:46.401116   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:50:46.401152   74584 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:50:46.499674   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:50:46.525387   74584 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:50:46.525422   74584 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:50:46.528653   74584 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:46.528683   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:50:46.537039   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:50:46.585103   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:50:46.700077   74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:50:46.700117   74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:50:46.802990   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:50:46.845193   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:50:46.845228   74584 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:50:46.948887   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:47.114091   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:50:47.114126   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:50:47.173908   74584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.701178901s)
	I1101 09:50:47.173921   74584 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.696642998s)
	I1101 09:50:47.173999   74584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:50:47.174095   74584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:50:47.203736   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:50:47.203782   74584 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:50:47.327504   74584 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:50:47.327541   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:50:47.447307   74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:50:47.447333   74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:50:47.479289   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:50:47.516143   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:50:47.537776   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:50:47.537808   74584 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:50:47.602456   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:50:47.602492   74584 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:50:47.634301   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:50:47.634334   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:50:47.666382   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:50:47.896414   74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:50:47.896454   74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:50:48.070881   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:50:48.070918   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:50:48.088172   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:50:48.112581   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:50:48.112615   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:50:48.384804   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.180058223s)
	I1101 09:50:48.433222   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:50:48.433251   74584 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:50:48.570103   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:50:48.712201   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:50:48.712239   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:50:48.761409   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.41696863s)
	I1101 09:50:49.019503   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.636542693s)
	I1101 09:50:49.055833   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:50:49.055864   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:50:49.130302   74584 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:50:49.130330   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:50:49.321757   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:50:49.321783   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:50:49.571119   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:50:49.804708   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:50:49.804738   74584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:50:49.962509   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:50:49.962544   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:50:50.281087   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:50:50.281117   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:50:50.772055   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:50:50.772080   74584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:50:51.239409   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:50:52.962797   74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:50:52.966311   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:52.966764   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:52.966789   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:52.966934   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:53.227038   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.727328057s)
	I1101 09:50:53.227151   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.69006708s)
	I1101 09:50:53.227189   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.642046598s)
	I1101 09:50:53.227242   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.424224705s)
	I1101 09:50:53.376728   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.427801852s)
	W1101 09:50:53.376771   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:50:53.376826   74584 retry.go:31] will retry after 359.696332ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:50:53.376871   74584 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.202843079s)
	I1101 09:50:53.376921   74584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.202805311s)
	I1101 09:50:53.376950   74584 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 09:50:53.377909   74584 node_ready.go:35] waiting up to 6m0s for node "addons-086339" to be "Ready" ...
	I1101 09:50:53.462748   74584 node_ready.go:49] node "addons-086339" is "Ready"
	I1101 09:50:53.462778   74584 node_ready.go:38] duration metric: took 84.807458ms for node "addons-086339" to be "Ready" ...
	I1101 09:50:53.462793   74584 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:50:53.462847   74584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:50:53.534003   74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:50:53.650576   74584 addons.go:239] Setting addon gcp-auth=true in "addons-086339"
	I1101 09:50:53.650630   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:53.652687   74584 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:50:53.655511   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:53.655896   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:53.655920   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:53.656060   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:53.737577   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:53.969325   74584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-086339" context rescaled to 1 replicas
	I1101 09:50:55.148780   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.669443662s)
	I1101 09:50:55.148826   74584 addons.go:480] Verifying addon ingress=true in "addons-086339"
	I1101 09:50:55.148852   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.632675065s)
	I1101 09:50:55.148956   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.482535978s)
	I1101 09:50:55.149057   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.060852546s)
	I1101 09:50:55.149064   74584 addons.go:480] Verifying addon registry=true in "addons-086339"
	I1101 09:50:55.149094   74584 addons.go:480] Verifying addon metrics-server=true in "addons-086339"
	I1101 09:50:55.149162   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.579011593s)
	I1101 09:50:55.150934   74584 out.go:179] * Verifying ingress addon...
	I1101 09:50:55.150992   74584 out.go:179] * Verifying registry addon...
	I1101 09:50:55.151019   74584 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-086339 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:50:55.152636   74584 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:50:55.152833   74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:50:55.236576   74584 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:50:55.236603   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:55.236704   74584 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:50:55.236726   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:55.608860   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.037686923s)
	W1101 09:50:55.608910   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:50:55.608932   74584 retry.go:31] will retry after 233.800882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:50:55.697978   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:55.698030   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:55.843247   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:50:56.241749   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:56.241968   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:56.550655   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.311175816s)
	I1101 09:50:56.550716   74584 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-086339"
	I1101 09:50:56.550663   74584 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.087794232s)
	I1101 09:50:56.550810   74584 api_server.go:72] duration metric: took 11.078058308s to wait for apiserver process to appear ...
	I1101 09:50:56.550891   74584 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:50:56.550935   74584 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1101 09:50:56.552309   74584 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:50:56.554454   74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:50:56.566874   74584 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1101 09:50:56.569220   74584 api_server.go:141] control plane version: v1.34.1
	I1101 09:50:56.569247   74584 api_server.go:131] duration metric: took 18.347182ms to wait for apiserver health ...
	I1101 09:50:56.569258   74584 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:50:56.586752   74584 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:50:56.586776   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:56.587214   74584 system_pods.go:59] 20 kube-system pods found
	I1101 09:50:56.587266   74584 system_pods.go:61] "amd-gpu-device-plugin-lr4lw" [bee1e3ae-5d43-4b43-a348-0e04ec066093] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:50:56.587277   74584 system_pods.go:61] "coredns-66bc5c9577-5v6h7" [ff58ca9c-6949-4ab8-b8ff-8be8e7b75757] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.587289   74584 system_pods.go:61] "coredns-66bc5c9577-vsbrs" [c3a65dae-82f4-4f33-b460-fa45a39b3342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.587297   74584 system_pods.go:61] "csi-hostpath-attacher-0" [50e03a30-f2e9-4ec1-ba85-6da2654030c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:50:56.587304   74584 system_pods.go:61] "csi-hostpath-resizer-0" [d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2] Pending
	I1101 09:50:56.587318   74584 system_pods.go:61] "csi-hostpathplugin-z7vjp" [96e87cd6-068d-40af-9966-b875b9a7629e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:50:56.587325   74584 system_pods.go:61] "etcd-addons-086339" [f17e5eab-51c0-409a-9bb3-3cb5e71200fd] Running
	I1101 09:50:56.587336   74584 system_pods.go:61] "kube-apiserver-addons-086339" [51b3d29f-af5e-441a-b3c0-754241fc92bc] Running
	I1101 09:50:56.587343   74584 system_pods.go:61] "kube-controller-manager-addons-086339" [62d54b81-f6bc-4bdc-bd22-c8a6fc39a043] Running
	I1101 09:50:56.587352   74584 system_pods.go:61] "kube-ingress-dns-minikube" [e328fd3e-a381-414d-ba99-1aa6f7f40585] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:50:56.587357   74584 system_pods.go:61] "kube-proxy-7fck9" [a834adcc-b0ec-4cad-8944-bea90a627787] Running
	I1101 09:50:56.587365   74584 system_pods.go:61] "kube-scheduler-addons-086339" [4db76834-5184-4a83-a228-35e83abc8c9d] Running
	I1101 09:50:56.587372   74584 system_pods.go:61] "metrics-server-85b7d694d7-6lx9r" [c4e44e90-7d77-43fc-913f-f26877e52760] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:50:56.587378   74584 system_pods.go:61] "nvidia-device-plugin-daemonset-jh2xq" [0a9234e2-8d6a-4110-86be-ff05f9be1a29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:50:56.587387   74584 system_pods.go:61] "registry-6b586f9694-8zvc5" [23d65f21-71d0-4da4-8f2f-5b59f93f9085] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:50:56.587395   74584 system_pods.go:61] "registry-creds-764b6fb674-ztjtq" [ae641ce9-b248-46a3-8e01-9d25e8d29825] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:50:56.587408   74584 system_pods.go:61] "registry-proxy-4p4n9" [73d260fc-8c68-439c-a460-208cdb29b271] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:50:56.587416   74584 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4kwxj" [e301a0c5-17dc-43be-9fd5-c14b76c1b92c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.587429   74584 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wzgp7" [4c770fa7-174c-43ab-ac63-635b19152843] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.587437   74584 system_pods.go:61] "storage-provisioner" [4c394064-33ff-4fd0-a4bc-afb948952ac6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:50:56.587448   74584 system_pods.go:74] duration metric: took 18.182475ms to wait for pod list to return data ...
	I1101 09:50:56.587460   74584 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:50:56.596967   74584 default_sa.go:45] found service account: "default"
	I1101 09:50:56.596990   74584 default_sa.go:55] duration metric: took 9.524828ms for default service account to be created ...
	I1101 09:50:56.596999   74584 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:50:56.613956   74584 system_pods.go:86] 20 kube-system pods found
	I1101 09:50:56.613988   74584 system_pods.go:89] "amd-gpu-device-plugin-lr4lw" [bee1e3ae-5d43-4b43-a348-0e04ec066093] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:50:56.613995   74584 system_pods.go:89] "coredns-66bc5c9577-5v6h7" [ff58ca9c-6949-4ab8-b8ff-8be8e7b75757] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.614003   74584 system_pods.go:89] "coredns-66bc5c9577-vsbrs" [c3a65dae-82f4-4f33-b460-fa45a39b3342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.614009   74584 system_pods.go:89] "csi-hostpath-attacher-0" [50e03a30-f2e9-4ec1-ba85-6da2654030c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:50:56.614014   74584 system_pods.go:89] "csi-hostpath-resizer-0" [d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:50:56.614020   74584 system_pods.go:89] "csi-hostpathplugin-z7vjp" [96e87cd6-068d-40af-9966-b875b9a7629e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:50:56.614023   74584 system_pods.go:89] "etcd-addons-086339" [f17e5eab-51c0-409a-9bb3-3cb5e71200fd] Running
	I1101 09:50:56.614028   74584 system_pods.go:89] "kube-apiserver-addons-086339" [51b3d29f-af5e-441a-b3c0-754241fc92bc] Running
	I1101 09:50:56.614033   74584 system_pods.go:89] "kube-controller-manager-addons-086339" [62d54b81-f6bc-4bdc-bd22-c8a6fc39a043] Running
	I1101 09:50:56.614040   74584 system_pods.go:89] "kube-ingress-dns-minikube" [e328fd3e-a381-414d-ba99-1aa6f7f40585] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:50:56.614045   74584 system_pods.go:89] "kube-proxy-7fck9" [a834adcc-b0ec-4cad-8944-bea90a627787] Running
	I1101 09:50:56.614051   74584 system_pods.go:89] "kube-scheduler-addons-086339" [4db76834-5184-4a83-a228-35e83abc8c9d] Running
	I1101 09:50:56.614058   74584 system_pods.go:89] "metrics-server-85b7d694d7-6lx9r" [c4e44e90-7d77-43fc-913f-f26877e52760] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:50:56.614073   74584 system_pods.go:89] "nvidia-device-plugin-daemonset-jh2xq" [0a9234e2-8d6a-4110-86be-ff05f9be1a29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:50:56.614089   74584 system_pods.go:89] "registry-6b586f9694-8zvc5" [23d65f21-71d0-4da4-8f2f-5b59f93f9085] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:50:56.614095   74584 system_pods.go:89] "registry-creds-764b6fb674-ztjtq" [ae641ce9-b248-46a3-8e01-9d25e8d29825] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:50:56.614100   74584 system_pods.go:89] "registry-proxy-4p4n9" [73d260fc-8c68-439c-a460-208cdb29b271] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:50:56.614105   74584 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kwxj" [e301a0c5-17dc-43be-9fd5-c14b76c1b92c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.614114   74584 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzgp7" [4c770fa7-174c-43ab-ac63-635b19152843] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.614118   74584 system_pods.go:89] "storage-provisioner" [4c394064-33ff-4fd0-a4bc-afb948952ac6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:50:56.614126   74584 system_pods.go:126] duration metric: took 17.122448ms to wait for k8s-apps to be running ...
	I1101 09:50:56.614136   74584 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:50:56.614196   74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:50:56.662305   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:56.676451   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:57.009640   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.27202291s)
	W1101 09:50:57.009684   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:50:57.009709   74584 retry.go:31] will retry after 295.092784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:50:57.009722   74584 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.357005393s)
	I1101 09:50:57.011440   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:50:57.012826   74584 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:50:57.014068   74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:50:57.014084   74584 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:50:57.060410   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:57.092501   74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:50:57.092526   74584 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:50:57.163456   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:57.166739   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:57.235815   74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:50:57.235844   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:50:57.305656   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:57.336319   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:50:57.561645   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:57.662574   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:57.663877   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:58.063249   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:58.157346   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:58.162591   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:58.566038   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:58.574812   74584 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.96059055s)
	I1101 09:50:58.574848   74584 system_svc.go:56] duration metric: took 1.960707525s WaitForService to wait for kubelet
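
	For context, the WaitForService step above boils down to asking systemd whether the kubelet unit is active; `systemctl is-active --quiet` answers purely through its exit code. A minimal stdlib-only sketch of that check (the bare unit name and the missing sudo are simplifications for the sketch, not minikube's exact invocation):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // is-active --quiet prints nothing and exits 0 only when the unit is active.
	        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
	            fmt.Println("kubelet is not active:", err) // non-zero exit means not running
	            return
	        }
	        fmt.Println("kubelet is active")
	    }
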
	I1101 09:50:58.574856   74584 kubeadm.go:587] duration metric: took 13.102108035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:50:58.574874   74584 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:50:58.575108   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.73180936s)
	I1101 09:50:58.586405   74584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 09:50:58.586436   74584 node_conditions.go:123] node cpu capacity is 2
	I1101 09:50:58.586457   74584 node_conditions.go:105] duration metric: took 11.577545ms to run NodePressure ...
	I1101 09:50:58.586472   74584 start.go:242] waiting for startup goroutines ...
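
	The NodePressure verification reads the node's capacity and conditions; the figures logged above (ephemeral storage 17734596Ki, 2 CPUs) come from Node.Status.Capacity. A hedged client-go sketch of the same lookup, assuming the kubeconfig path shown in the log:

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	        if err != nil {
	            panic(err)
	        }
	        for _, n := range nodes.Items {
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	            for _, c := range n.Status.Conditions {
	                // MemoryPressure / DiskPressure / PIDPressure should report False on a healthy node.
	                fmt.Printf("  %s=%s\n", c.Type, c.Status)
	            }
	        }
	    }
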
	I1101 09:50:58.664635   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:58.665016   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:59.063972   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:59.170042   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:59.176798   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:59.577259   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:59.664063   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:59.665180   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:00.063306   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:00.173864   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:00.174338   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.868634982s)
	W1101 09:51:00.174389   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:00.174423   74584 retry.go:31] will retry after 509.276592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:00.174461   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.838092131s)
	I1101 09:51:00.175590   74584 addons.go:480] Verifying addon gcp-auth=true in "addons-086339"
	I1101 09:51:00.176082   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:00.177144   74584 out.go:179] * Verifying gcp-auth addon...
	I1101 09:51:00.179153   74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:51:00.185078   74584 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:51:00.185104   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
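
	Each of the interleaved "waiting for pod" lines is a poll of a label selector until the matching pod leaves Pending. A sketch of that loop with client-go, using the gcp-auth selector and namespace from the log (illustrative only, not kapi.go's implementation):

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        selector := "kubernetes.io/minikube-addons=gcp-auth" // label from the log
	        for {
	            pods, err := cs.CoreV1().Pods("gcp-auth").List(context.Background(),
	                metav1.ListOptions{LabelSelector: selector})
	            if err != nil {
	                fmt.Println("list failed:", err)
	            }
	            for _, p := range pods.Items {
	                fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	                if p.Status.Phase == corev1.PodRunning {
	                    return
	                }
	            }
	            time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
	        }
	    }
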
	I1101 09:51:00.569905   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:00.666711   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:00.668288   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:00.684564   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:00.685802   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:01.058804   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:01.162413   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:01.162519   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:01.184967   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:01.561792   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:01.660578   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:01.660604   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:01.687510   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:02.048703   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.364096236s)
	W1101 09:51:02.048744   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:02.048770   74584 retry.go:31] will retry after 922.440306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:02.058033   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:02.156454   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:02.156517   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:02.184626   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:02.560632   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:02.663377   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:02.663392   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:02.682802   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:02.972204   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:03.066417   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:03.162498   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:03.164331   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:03.185238   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:03.558965   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:03.660685   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:03.662797   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:03.683857   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:03.988155   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.015906584s)
	W1101 09:51:03.988197   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:03.988221   74584 retry.go:31] will retry after 1.512024934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:04.059661   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:04.158989   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:04.159171   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:04.184262   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:04.559848   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:04.665219   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:04.666152   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:04.684684   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:05.059373   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:05.157706   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:05.158120   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:05.184998   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:05.500748   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:05.560240   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:05.659023   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:05.660031   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:05.684729   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:06.059474   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:06.157196   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:06.157311   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:06.182088   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:51:06.269741   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:06.269786   74584 retry.go:31] will retry after 2.204116799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:06.559209   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:06.657408   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:06.657492   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:06.683284   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:07.059744   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:07.160264   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:07.160549   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:07.183753   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:07.558791   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:07.658454   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:07.662675   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:07.684198   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:08.065874   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:08.160732   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:08.161495   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:08.182870   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:08.474158   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:08.564218   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:08.659007   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:08.661853   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:08.684365   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:09.062466   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:09.159228   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:09.159372   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:09.183927   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:09.561230   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:09.664415   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:09.666273   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:09.684865   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:09.700010   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.225813085s)
	W1101 09:51:09.700056   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:09.700081   74584 retry.go:31] will retry after 3.484047661s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
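
	The retry delays logged so far grow roughly geometrically with some jitter (295ms, 509ms, 922ms, 1.5s, 2.2s, 3.5s). A hypothetical stdlib-only stand-in for that backoff loop around the failing apply (paths copied from the log; this is not the retry.go implementation):

	    package main

	    import (
	        "fmt"
	        "math/rand"
	        "os/exec"
	        "time"
	    )

	    func main() {
	        delay := 300 * time.Millisecond
	        for attempt := 1; attempt <= 9; attempt++ {
	            // The command being retried in the log (sudo/KUBECONFIG dropped for the sketch).
	            err := exec.Command("kubectl", "apply", "--force",
	                "-f", "/etc/kubernetes/addons/ig-crd.yaml",
	                "-f", "/etc/kubernetes/addons/ig-deployment.yaml").Run()
	            if err == nil {
	                fmt.Println("apply succeeded on attempt", attempt)
	                return
	            }
	            // Jittered exponential backoff: grow the base delay and add up to 50% noise.
	            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
	            fmt.Printf("attempt %d failed (%v); retrying in %s\n", attempt, err, sleep)
	            time.Sleep(sleep)
	            delay = delay * 17 / 10 // ~1.7x growth, loosely matching the observed progression
	        }
	        fmt.Println("giving up")
	    }
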
	I1101 09:51:10.059617   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:10.156799   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:10.156883   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:10.183999   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:10.560483   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:10.661603   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:10.661780   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:10.686351   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:11.081718   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:11.188353   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:11.188507   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:11.188624   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:11.558634   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:11.660662   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:11.663221   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:11.683762   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:12.059387   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:12.156602   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:12.156961   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:12.183069   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:12.558360   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:12.657779   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:12.659195   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:12.684167   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:13.059425   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:13.159273   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:13.159720   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:13.182662   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:13.184729   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:13.558837   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:13.659127   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:13.659431   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:13.682290   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:51:14.013627   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:14.013674   74584 retry.go:31] will retry after 3.772853511s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:14.060473   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:14.168480   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:14.168525   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:14.195048   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:14.559885   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:14.655949   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:14.656674   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:14.682561   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:15.059773   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:15.158683   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:15.158997   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:15.185198   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:15.559183   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:15.657568   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:15.657667   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:15.683337   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:16.059611   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:16.156727   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:16.158488   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:16.182596   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:16.558923   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:16.656902   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:16.657753   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:16.683813   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:17.059799   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:17.157794   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:17.158058   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:17.183320   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:17.562511   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:17.661802   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:17.663610   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:17.683753   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:17.786898   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:18.062486   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:18.165903   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:18.166305   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:18.185036   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:18.563358   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:18.661780   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:18.664168   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:18.686501   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:19.062933   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:19.159993   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.373047606s)
	W1101 09:51:19.160054   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:19.160090   74584 retry.go:31] will retry after 8.062833615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:19.160265   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:19.161792   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:19.187129   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:19.562165   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:19.662490   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:19.662887   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:19.685224   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:20.062452   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:20.158649   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:20.158963   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:20.185553   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:20.560324   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:20.663470   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:20.664773   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:20.687217   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:21.058336   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:21.158067   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:21.158764   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:21.184179   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:21.562709   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:21.660636   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:21.661331   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:21.683251   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:22.058468   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:22.158449   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:22.161441   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:22.183647   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:22.559209   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:22.657596   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:22.658067   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:22.684022   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:23.060587   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:23.159313   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:23.160492   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:23.183233   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:23.577231   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:23.658412   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:23.661233   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:23.684740   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:24.059042   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:24.157394   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:24.158911   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:24.182864   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:24.559933   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:24.657638   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:24.661214   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:24.686127   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:25.059953   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:25.158151   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:25.160939   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:25.183657   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:25.565339   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:25.663990   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:25.664201   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:25.683465   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:26.059376   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:26.158991   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:26.159088   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:26.184884   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:26.559386   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:26.657922   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:26.660583   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:26.683688   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:27.058939   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:27.156101   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:27.156998   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:27.182909   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:27.224025   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:27.562477   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:27.660651   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:27.662259   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:27.681905   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:28.059984   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:28.160493   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:28.162286   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:28.186135   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:51:28.200979   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:28.201029   74584 retry.go:31] will retry after 10.395817371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:28.558989   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:28.657430   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:28.660330   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:28.683885   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:29.061934   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:29.157765   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:29.157917   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:29.184278   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:29.560897   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:29.657774   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:29.657838   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:29.683106   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:30.059693   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:30.160732   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:30.166378   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:30.265635   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:30.558787   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:30.656060   74584 kapi.go:107] duration metric: took 35.503223323s to wait for kubernetes.io/minikube-addons=registry ...
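
	The registry wait above completed after roughly 35.5s; each of these label waits runs under an overall deadline rather than looping forever. A small illustrative wrapper showing that pattern with context.WithTimeout and a ticker (the waitFor helper and its parameters are invented for the sketch):

	    package main

	    import (
	        "context"
	        "errors"
	        "fmt"
	        "time"
	    )

	    // waitFor polls check every interval until it returns true or the timeout expires.
	    func waitFor(timeout, interval time.Duration, check func() bool) error {
	        ctx, cancel := context.WithTimeout(context.Background(), timeout)
	        defer cancel()
	        t := time.NewTicker(interval)
	        defer t.Stop()
	        for {
	            if check() {
	                return nil
	            }
	            select {
	            case <-ctx.Done():
	                return errors.New("timed out waiting for condition")
	            case <-t.C:
	            }
	        }
	    }

	    func main() {
	        start := time.Now()
	        err := waitFor(6*time.Minute, 500*time.Millisecond, func() bool {
	            // Placeholder condition; in the log this is "pod with label X is Running".
	            return time.Since(start) > 2*time.Second
	        })
	        fmt.Println("result:", err, "after", time.Since(start).Round(time.Millisecond))
	    }
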
	I1101 09:51:30.656373   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:30.682215   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:31.059187   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:31.157561   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:31.258067   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:31.560106   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:31.657305   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:31.683226   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:32.059058   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:32.158395   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:32.182943   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:32.559674   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:32.660135   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:32.684028   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:33.059220   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:33.159029   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:33.189054   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:33.699380   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:33.699471   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:33.700370   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:34.059307   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:34.158409   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:34.189459   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:34.558736   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:34.656864   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:34.682855   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:35.058847   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:35.156770   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:35.182411   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:35.559605   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:35.657060   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:35.682886   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:36.059230   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:36.158265   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:36.185067   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:36.562462   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:36.657785   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:36.684734   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:37.059270   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:37.156638   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:37.184172   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:37.558438   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:37.656955   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:37.684255   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:38.061827   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:38.157365   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:38.182685   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:38.560831   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:38.597843   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:38.656804   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:38.686009   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:39.061543   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:39.158425   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:39.183760   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:39.559306   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:39.657197   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:39.684893   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:39.748441   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.150549422s)
	W1101 09:51:39.748504   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:39.748545   74584 retry.go:31] will retry after 20.354212059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
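
[Annotation] The retry.go lines above show minikube re-running the failed kubectl apply after a growing delay. The sketch below shows the general shape of such a retry loop with jittered backoff; it is an illustration only, not minikube's actual retry package, and the attempt count, base delay, and manifest paths are assumptions.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyWithRetry re-runs a kubectl apply a few times with a growing,
// jittered delay, roughly mirroring the "will retry after ..." lines above.
func applyWithRetry(args []string, attempts int) error {
	delay := 10 * time.Second
	var err error
	for i := 0; i < attempts; i++ {
		out, runErr := exec.Command("kubectl", args...).CombinedOutput()
		if runErr == nil {
			return nil
		}
		err = fmt.Errorf("apply failed: %v\n%s", runErr, out)
		// Add jitter so repeated failures do not retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	// Hypothetical invocation; file names mirror the ones in the log.
	if err := applyWithRetry([]string{"apply", "--force", "-f", "ig-crd.yaml", "-f", "ig-deployment.yaml"}, 3); err != nil {
		fmt.Println("giving up:", err)
	}
}
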
	I1101 09:51:40.091278   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:40.159135   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:40.189976   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:40.561293   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:40.657506   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:40.682812   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:41.059036   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:41.157077   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:41.183024   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:41.560657   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:41.662059   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:41.686139   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:42.059712   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:42.158078   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:42.184717   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:42.558428   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:42.657474   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:42.682401   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:43.061067   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:43.159023   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:43.182945   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:43.559721   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:43.658905   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:43.683665   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:44.059768   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:44.156686   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:44.182520   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:44.558486   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:44.659410   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:44.686714   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:45.059691   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:45.161012   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:45.186846   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:45.566991   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:45.661771   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:45.683563   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:46.061274   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:46.157945   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:46.184842   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:46.559462   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:46.659702   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:46.682680   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:47.058242   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:47.159894   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:47.185416   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:47.561755   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:47.660011   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:47.683518   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:48.061815   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:48.158606   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:48.186741   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:48.562551   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:48.660513   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:48.683374   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:49.061955   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:49.158516   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:49.182835   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:49.558347   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:49.660756   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:49.685651   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:50.059457   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:50.161169   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:50.185382   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:50.560490   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:50.667931   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:50.691744   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:51.060229   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:51.163272   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:51.185468   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:51.561847   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:51.657559   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:51.684472   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:52.065897   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:52.165405   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:52.184183   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:52.558429   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:52.659763   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:52.687124   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:53.060334   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:53.159793   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:53.260599   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:53.836679   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:53.844731   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:53.846382   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:54.061169   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:54.160164   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:54.184130   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:54.559624   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:54.660771   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:54.683387   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:55.060182   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:55.158098   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:55.184607   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:55.568135   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:55.666901   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:55.688352   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:56.061312   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:56.160289   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:56.183561   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:56.559442   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:56.666114   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:56.686070   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:57.059598   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:57.157253   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:57.184083   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:57.559370   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:57.657282   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:57.684369   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:58.059645   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:58.160950   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:58.183605   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:58.559980   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:58.660720   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:58.682723   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:59.061658   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:59.161368   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:59.186554   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:59.562493   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:59.658000   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:59.686396   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:00.059261   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:00.103310   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:52:00.158774   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:00.183231   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:00.562324   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:00.659611   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:00.682795   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:01.061408   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:01.158866   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:01.188200   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:01.344727   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.241365643s)
	W1101 09:52:01.344783   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:52:01.344810   74584 retry.go:31] will retry after 24.70836809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:52:01.558702   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:01.657288   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:01.683224   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:02.061177   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:02.158031   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:02.185134   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:02.559729   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:02.661884   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:02.684276   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:03.058102   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:03.159115   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:03.184840   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:03.559718   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:03.658993   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:03.682755   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:04.061600   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:04.157504   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:04.182206   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:04.558833   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:04.658122   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:04.690795   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:05.060282   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:05.159649   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:05.182512   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:05.558584   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:05.657372   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:05.682747   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:06.059347   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:06.156954   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:06.184088   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:06.559677   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:06.657737   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:06.683063   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:07.058922   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:07.156647   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:07.183210   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:07.559741   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:07.656366   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:07.684732   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:08.060305   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:08.161326   74584 kapi.go:107] duration metric: took 1m13.008685899s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:52:08.184485   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:08.563527   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:08.684225   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:09.062454   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:09.183134   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:09.559703   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:09.683034   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:10.059517   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:10.183595   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:10.559051   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:10.684292   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:11.060725   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:11.184057   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:11.560407   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:11.684061   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:12.059623   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:12.338951   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:12.563238   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:12.687086   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:13.065805   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:13.186970   74584 kapi.go:107] duration metric: took 1m13.007813603s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:52:13.188654   74584 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-086339 cluster.
	I1101 09:52:13.190102   74584 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:52:13.191551   74584 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
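
[Annotation] The gcp-auth messages above say that adding a label with the gcp-auth-skip-secret key keeps the mounted credentials out of a pod. Below is a small sketch of what such a pod object could look like, built with the Kubernetes Go types and printed as YAML; the pod name, image, and the label value "true" are assumptions, since the log only names the label key.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pod name and image are hypothetical; only the label key comes from the log.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx:alpine"}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
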
	I1101 09:52:13.561959   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:14.059590   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:14.558397   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:15.059526   74584 kapi.go:107] duration metric: took 1m18.505070405s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
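
[Annotation] The kapi.go lines above poll pods that match a label selector until they report Ready, then log how long the wait took. Below is a rough client-go sketch of that pattern, assuming the kubeconfig path from the log and the csi-hostpath-driver selector; the polling interval, timeout, and helper names are illustrative and not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabelledPods polls until every pod matching the selector is Running
// and Ready, echoing the kapi.go "waiting for pod ..." loop in spirit only.
func waitForLabelledPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists are simply retried
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning || !podReady(&p) {
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path taken from the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForLabelledPods(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 18*time.Minute); err != nil {
		panic(err)
	}
}
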
	I1101 09:52:26.053439   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:52:26.787218   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:52:26.787354   74584 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 09:52:26.789142   74584 out.go:179] * Enabled addons: default-storageclass, registry-creds, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1101 09:52:26.790527   74584 addons.go:515] duration metric: took 1m41.317758805s for enable addons: enabled=[default-storageclass registry-creds amd-gpu-device-plugin storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1101 09:52:26.790585   74584 start.go:247] waiting for cluster config update ...
	I1101 09:52:26.790606   74584 start.go:256] writing updated cluster config ...
	I1101 09:52:26.790869   74584 ssh_runner.go:195] Run: rm -f paused
	I1101 09:52:26.797220   74584 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:52:26.802135   74584 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vsbrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.807671   74584 pod_ready.go:94] pod "coredns-66bc5c9577-vsbrs" is "Ready"
	I1101 09:52:26.807696   74584 pod_ready.go:86] duration metric: took 5.533544ms for pod "coredns-66bc5c9577-vsbrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.809972   74584 pod_ready.go:83] waiting for pod "etcd-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.815396   74584 pod_ready.go:94] pod "etcd-addons-086339" is "Ready"
	I1101 09:52:26.815421   74584 pod_ready.go:86] duration metric: took 5.421578ms for pod "etcd-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.818352   74584 pod_ready.go:83] waiting for pod "kube-apiserver-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.823369   74584 pod_ready.go:94] pod "kube-apiserver-addons-086339" is "Ready"
	I1101 09:52:26.823403   74584 pod_ready.go:86] duration metric: took 5.02397ms for pod "kube-apiserver-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.825247   74584 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:27.201328   74584 pod_ready.go:94] pod "kube-controller-manager-addons-086339" is "Ready"
	I1101 09:52:27.201355   74584 pod_ready.go:86] duration metric: took 376.08311ms for pod "kube-controller-manager-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:27.402263   74584 pod_ready.go:83] waiting for pod "kube-proxy-7fck9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:27.802591   74584 pod_ready.go:94] pod "kube-proxy-7fck9" is "Ready"
	I1101 09:52:27.802625   74584 pod_ready.go:86] duration metric: took 400.328354ms for pod "kube-proxy-7fck9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:28.002425   74584 pod_ready.go:83] waiting for pod "kube-scheduler-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:28.401943   74584 pod_ready.go:94] pod "kube-scheduler-addons-086339" is "Ready"
	I1101 09:52:28.401969   74584 pod_ready.go:86] duration metric: took 399.516912ms for pod "kube-scheduler-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:28.401979   74584 pod_ready.go:40] duration metric: took 1.604730154s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:52:28.446357   74584 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:52:28.448281   74584 out.go:179] * Done! kubectl is now configured to use "addons-086339" cluster and "default" namespace by default
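
[Annotation] The final message notes that kubectl now defaults to the addons-086339 cluster and the default namespace. One way to confirm what a kubeconfig currently points at is to load it with client-go and print the current context, as in this small sketch; the ~/.kube/config location is the conventional default and is an assumption here.

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		panic(err)
	}
	// Assumed default kubeconfig location.
	cfg, err := clientcmd.LoadFromFile(filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cur := cfg.Contexts[cfg.CurrentContext]
	if cur == nil {
		fmt.Println("no current context set")
		return
	}
	ns := cur.Namespace
	if ns == "" {
		ns = "default" // kubectl falls back to the default namespace
	}
	fmt.Printf("current context %q uses cluster %q and namespace %q\n", cfg.CurrentContext, cur.Cluster, ns)
}
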
	
	
	==> CRI-O <==
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.784218599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991156784188002,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f013a231-abe4-400e-9cf0-828960e4e75c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.785041179Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=919288c0-46a9-41dd-ae74-e45a6ab405b8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.785180190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=919288c0-46a9-41dd-ae74-e45a6ab405b8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.786729680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e24bc9ad0bcf2346a54dba46c112594d3456f8e0851e42ae540839ab98ade7,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761990734122207031,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28195b893a436d57c90ecac8b5fe73e5c1511f1415dc342be8113625a0b8d79a,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761990729102734319,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68f7cffce3b8170ddcfc4c830b594fbca731822c5a5c3c0fac39926a07fb45b5,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761990719918712512,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3e64d3efaef7385f69804094ceefebb9c929b31f339e7185c4ea6397ea2ccd,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761990718492685647,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ddab9d49bd9e1c4a3cbea6a7d518a89881a1bd73e967f274d22a36f1dfea5f,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761990716814923539,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ada47aa6071a7356f4a8da278d3fc6aca1557ba5c9a0099793d41309eba1008,PodSandboxId:943b7b78175f24034b62c4341008ce9ed69d78515991694c43fd13b4f6ac1fc1,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761990715391394480,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb32e4824fbd208045e8bad8dfedff1f23e927c32f1b62fae12233696df589bb,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761990713966475127,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSa
ndboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:441d236fd7e39d9b3a5cc6b3bb8961ce35e
c981238120611cdcc3cb61d7735b1,PodSandboxId:ed25ff910660155f553fa5ca020216deb811acfe62895c250c2f4116f4e42adf,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761990712051266696,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e03a30-f2e9-4ec1-ba85-6da2654030c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde07abd1
e0966a5ed194175cec37ddb2ab38d4771b5729d05571ed5072606a8,PodSandboxId:28091fc92ecf572bc5fbe283ff9cafe33ce0a67c0edcc4cbdfa901ef366642c5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990709762301697,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-4kwxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e301a0c5-17dc-43be-9fd5-c14b76c1b92c,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a29c1fc2c879d0e534a8cebc61baf83da09a1b3e98b2972f576c64cacf35d44,PodSandboxId:cd9903f7fe60f66bc1eee002e6b25f4b8358953184ed7bdceb69ca35d37af467,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990708178896364,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-wzgp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c770fa7-174c-43ab-ac63-635b19152843,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-b
a99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.
kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kube
rnetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe163
79ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08
502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=919288c0-46a9-41dd-ae74-e45a6ab405b8 name=/runtime.v1.RuntimeService/ListContainers
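
The ListContainers responses dumped above (and repeated below) come from clients polling CRI-O's gRPC RuntimeService with an empty filter, which is why the daemon also logs "No filters were applied, returning full container list". As a minimal illustrative sketch only (not part of the test harness), the following Go program issues the same call; the socket path /var/run/crio/crio.sock and the printed fields are assumptions.

package main

import (
    "context"
    "fmt"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Assumed CRI-O endpoint on the node; adjust if the runtime socket differs.
    conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // An empty ListContainersRequest means "no filters", matching the requests in this log.
    resp, err := runtimeapi.NewRuntimeServiceClient(conn).
        ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    if err != nil {
        panic(err)
    }
    for _, c := range resp.Containers {
        // Print container name, owning pod, and state (CONTAINER_RUNNING / CONTAINER_EXITED / ...).
        fmt.Printf("%-40s %-50s %s\n",
            c.GetMetadata().GetName(),
            c.GetLabels()["io.kubernetes.pod.name"],
            c.GetState())
    }
}

The same listing can be obtained interactively on the node with crictl ps -a, which is what the test's log collection effectively relies on.
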
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.836774750Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1325ccf-549b-4afd-a7a0-3ba3cf230257 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.836974342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1325ccf-549b-4afd-a7a0-3ba3cf230257 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.838794311Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51f0e5e1-246d-41ea-a6a5-69a722a4d8d8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.840040869Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991156840011856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51f0e5e1-246d-41ea-a6a5-69a722a4d8d8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.840641374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e67fc5f7-809d-43db-b260-a49382d1af1e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.840714402Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e67fc5f7-809d-43db-b260-a49382d1af1e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.842026613Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e24bc9ad0bcf2346a54dba46c112594d3456f8e0851e42ae540839ab98ade7,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761990734122207031,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28195b893a436d57c90ecac8b5fe73e5c1511f1415dc342be8113625a0b8d79a,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761990729102734319,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68f7cffce3b8170ddcfc4c830b594fbca731822c5a5c3c0fac39926a07fb45b5,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761990719918712512,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3e64d3efaef7385f69804094ceefebb9c929b31f339e7185c4ea6397ea2ccd,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761990718492685647,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ddab9d49bd9e1c4a3cbea6a7d518a89881a1bd73e967f274d22a36f1dfea5f,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761990716814923539,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ada47aa6071a7356f4a8da278d3fc6aca1557ba5c9a0099793d41309eba1008,PodSandboxId:943b7b78175f24034b62c4341008ce9ed69d78515991694c43fd13b4f6ac1fc1,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761990715391394480,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb32e4824fbd208045e8bad8dfedff1f23e927c32f1b62fae12233696df589bb,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761990713966475127,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSa
ndboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:441d236fd7e39d9b3a5cc6b3bb8961ce35e
c981238120611cdcc3cb61d7735b1,PodSandboxId:ed25ff910660155f553fa5ca020216deb811acfe62895c250c2f4116f4e42adf,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761990712051266696,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e03a30-f2e9-4ec1-ba85-6da2654030c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde07abd1
e0966a5ed194175cec37ddb2ab38d4771b5729d05571ed5072606a8,PodSandboxId:28091fc92ecf572bc5fbe283ff9cafe33ce0a67c0edcc4cbdfa901ef366642c5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990709762301697,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-4kwxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e301a0c5-17dc-43be-9fd5-c14b76c1b92c,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a29c1fc2c879d0e534a8cebc61baf83da09a1b3e98b2972f576c64cacf35d44,PodSandboxId:cd9903f7fe60f66bc1eee002e6b25f4b8358953184ed7bdceb69ca35d37af467,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990708178896364,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-wzgp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c770fa7-174c-43ab-ac63-635b19152843,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-b
a99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.
kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kube
rnetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe163
79ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08
502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e67fc5f7-809d-43db-b260-a49382d1af1e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.887455463Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e8bda77b-33b1-464e-83d1-c50ba1fa93d2 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.887537215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e8bda77b-33b1-464e-83d1-c50ba1fa93d2 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.889650015Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54d776f5-ccf5-4e89-bc0d-8c4f2b8f4b9c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.892122931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991156892094487,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54d776f5-ccf5-4e89-bc0d-8c4f2b8f4b9c name=/runtime.v1.ImageService/ImageFsInfo
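
The ImageFsInfo responses in this log report usage for CRI-O's image store at /var/lib/containers/storage/overlay-images (here 511388 bytes and 186 inodes). As a hedged sketch under the same socket-path assumption as above, the figures could also be read through the CRI ImageService:

package main

import (
    "context"
    "fmt"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Assumed CRI-O endpoint, as in the ListContainers sketch earlier in this log section.
    conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        panic(err)
    }
    defer conn.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    resp, err := runtimeapi.NewImageServiceClient(conn).
        ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
    if err != nil {
        panic(err)
    }
    for _, fs := range resp.ImageFilesystems {
        // Mountpoint, UsedBytes, and InodesUsed mirror the fields in the response logged above.
        fmt.Printf("%s used=%d bytes inodes=%d\n",
            fs.GetFsId().GetMountpoint(),
            fs.GetUsedBytes().GetValue(),
            fs.GetInodesUsed().GetValue())
    }
}
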
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.893383913Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c51c36a1-d13c-45fc-bf6a-e530b769eedf name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.893444232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c51c36a1-d13c-45fc-bf6a-e530b769eedf name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.893995790Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e24bc9ad0bcf2346a54dba46c112594d3456f8e0851e42ae540839ab98ade7,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761990734122207031,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28195b893a436d57c90ecac8b5fe73e5c1511f1415dc342be8113625a0b8d79a,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761990729102734319,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68f7cffce3b8170ddcfc4c830b594fbca731822c5a5c3c0fac39926a07fb45b5,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761990719918712512,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3e64d3efaef7385f69804094ceefebb9c929b31f339e7185c4ea6397ea2ccd,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761990718492685647,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ddab9d49bd9e1c4a3cbea6a7d518a89881a1bd73e967f274d22a36f1dfea5f,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761990716814923539,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ada47aa6071a7356f4a8da278d3fc6aca1557ba5c9a0099793d41309eba1008,PodSandboxId:943b7b78175f24034b62c4341008ce9ed69d78515991694c43fd13b4f6ac1fc1,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761990715391394480,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb32e4824fbd208045e8bad8dfedff1f23e927c32f1b62fae12233696df589bb,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761990713966475127,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSa
ndboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:441d236fd7e39d9b3a5cc6b3bb8961ce35e
c981238120611cdcc3cb61d7735b1,PodSandboxId:ed25ff910660155f553fa5ca020216deb811acfe62895c250c2f4116f4e42adf,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761990712051266696,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e03a30-f2e9-4ec1-ba85-6da2654030c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde07abd1
e0966a5ed194175cec37ddb2ab38d4771b5729d05571ed5072606a8,PodSandboxId:28091fc92ecf572bc5fbe283ff9cafe33ce0a67c0edcc4cbdfa901ef366642c5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990709762301697,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-4kwxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e301a0c5-17dc-43be-9fd5-c14b76c1b92c,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a29c1fc2c879d0e534a8cebc61baf83da09a1b3e98b2972f576c64cacf35d44,PodSandboxId:cd9903f7fe60f66bc1eee002e6b25f4b8358953184ed7bdceb69ca35d37af467,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990708178896364,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-wzgp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c770fa7-174c-43ab-ac63-635b19152843,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-b
a99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.
kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kube
rnetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe163
79ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08
502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c51c36a1-d13c-45fc-bf6a-e530b769eedf name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.935098511Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e5306b4-2097-4829-bf8d-a091088cf6d6 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.935189418Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e5306b4-2097-4829-bf8d-a091088cf6d6 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.938349989Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7df4215f-8f05-45ff-9e39-e7bc4a59b7ad name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.940491200Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761991156940460406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7df4215f-8f05-45ff-9e39-e7bc4a59b7ad name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.941554119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d38df44b-a4f8-46ca-95ea-80b081b8f514 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.941989862Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d38df44b-a4f8-46ca-95ea-80b081b8f514 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:59:16 addons-086339 crio[826]: time="2025-11-01 09:59:16.943544525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e24bc9ad0bcf2346a54dba46c112594d3456f8e0851e42ae540839ab98ade7,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761990734122207031,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28195b893a436d57c90ecac8b5fe73e5c1511f1415dc342be8113625a0b8d79a,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761990729102734319,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68f7cffce3b8170ddcfc4c830b594fbca731822c5a5c3c0fac39926a07fb45b5,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761990719918712512,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3e64d3efaef7385f69804094ceefebb9c929b31f339e7185c4ea6397ea2ccd,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761990718492685647,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ddab9d49bd9e1c4a3cbea6a7d518a89881a1bd73e967f274d22a36f1dfea5f,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761990716814923539,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ada47aa6071a7356f4a8da278d3fc6aca1557ba5c9a0099793d41309eba1008,PodSandboxId:943b7b78175f24034b62c4341008ce9ed69d78515991694c43fd13b4f6ac1fc1,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761990715391394480,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb32e4824fbd208045e8bad8dfedff1f23e927c32f1b62fae12233696df589bb,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761990713966475127,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSa
ndboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:441d236fd7e39d9b3a5cc6b3bb8961ce35e
c981238120611cdcc3cb61d7735b1,PodSandboxId:ed25ff910660155f553fa5ca020216deb811acfe62895c250c2f4116f4e42adf,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761990712051266696,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e03a30-f2e9-4ec1-ba85-6da2654030c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde07abd1
e0966a5ed194175cec37ddb2ab38d4771b5729d05571ed5072606a8,PodSandboxId:28091fc92ecf572bc5fbe283ff9cafe33ce0a67c0edcc4cbdfa901ef366642c5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990709762301697,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-4kwxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e301a0c5-17dc-43be-9fd5-c14b76c1b92c,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a29c1fc2c879d0e534a8cebc61baf83da09a1b3e98b2972f576c64cacf35d44,PodSandboxId:cd9903f7fe60f66bc1eee002e6b25f4b8358953184ed7bdceb69ca35d37af467,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990708178896364,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-wzgp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c770fa7-174c-43ab-ac63-635b19152843,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-b
a99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.
kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kube
rnetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-
vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840e
c8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe163
79ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{N
ame:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08
502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminatio
nGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d38df44b-a4f8-46ca-95ea-80b081b8f514 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	d8f9ab035f10b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   ecbb6e0269dbe       busybox
	54e24bc9ad0bc       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	28195b893a436       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	60f64f1e12642       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             7 minutes ago       Running             controller                               0                   b2e63f129e7ca       ingress-nginx-controller-675c5ddd98-g7dks
	68f7cffce3b81       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	be3e64d3efaef       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	49ddab9d49bd9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	9ada47aa6071a       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   943b7b78175f2       csi-hostpath-resizer-0
	bb32e4824fbd2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	a4b410307ca23       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   7 minutes ago       Exited              patch                                    0                   48e637e86e449       ingress-nginx-admission-patch-dw6sn
	441d236fd7e39       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   ed25ff9106601       csi-hostpath-attacher-0
	bde07abd1e096       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   28091fc92ecf5       snapshot-controller-7d9fbc56b8-4kwxj
	764b375ef3791       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   7 minutes ago       Exited              create                                   0                   1c83f726dda75       ingress-nginx-admission-create-d7qkm
	9a29c1fc2c879       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   cd9903f7fe60f       snapshot-controller-7d9fbc56b8-wzgp7
	6ccff636c81da       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            7 minutes ago       Running             gadget                                   0                   ae1c1b106a1ce       gadget-p2brt
	e5d4912957560       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               7 minutes ago       Running             minikube-ingress-dns                     0                   8aac4234df2d1       kube-ingress-dns-minikube
	323c0222f1b72       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     8 minutes ago       Running             amd-gpu-device-plugin                    0                   1c7e949564af5       amd-gpu-device-plugin-lr4lw
	6de230bb7ebf7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   4fbf69bbad2cf       storage-provisioner
	a27cff89c3381       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             8 minutes ago       Running             coredns                                  0                   d7fa84c405309       coredns-66bc5c9577-vsbrs
	260edbddb00ef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             8 minutes ago       Running             kube-proxy                               0                   089a55380f097       kube-proxy-7fck9
	86586375e770d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             8 minutes ago       Running             kube-scheduler                           0                   47c204cffec81       kube-scheduler-addons-086339
	e1c9ad62c824f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             8 minutes ago       Running             kube-apiserver                           0                   25028e524345d       kube-apiserver-addons-086339
	195a44f107dbd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             8 minutes ago       Running             etcd                                     0                   0780152663a4b       etcd-addons-086339
	9a6a05d5c3b32       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             8 minutes ago       Running             kube-controller-manager                  0                   4303a653e0e77       kube-controller-manager-addons-086339
	
	
	==> coredns [a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387] <==
	[INFO] 10.244.0.8:46984 - 64533 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000141653s
	[INFO] 10.244.0.8:46984 - 26572 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000122796s
	[INFO] 10.244.0.8:46984 - 13929 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000122328s
	[INFO] 10.244.0.8:46984 - 50125 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000111517s
	[INFO] 10.244.0.8:46984 - 28460 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076823s
	[INFO] 10.244.0.8:46984 - 37293 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000357436s
	[INFO] 10.244.0.8:46984 - 35576 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000074841s
	[INFO] 10.244.0.8:47197 - 56588 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121682s
	[INFO] 10.244.0.8:47197 - 56863 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074546s
	[INFO] 10.244.0.8:55042 - 52218 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00018264s
	[INFO] 10.244.0.8:55042 - 52511 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079606s
	[INFO] 10.244.0.8:46708 - 46443 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066375s
	[INFO] 10.244.0.8:46708 - 46765 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066983s
	[INFO] 10.244.0.8:59900 - 32652 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000207279s
	[INFO] 10.244.0.8:59900 - 32872 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000078309s
	[INFO] 10.244.0.23:50316 - 52228 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001915683s
	[INFO] 10.244.0.23:47612 - 63606 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002354882s
	[INFO] 10.244.0.23:53727 - 34179 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138277s
	[INFO] 10.244.0.23:43312 - 5456 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125706s
	[INFO] 10.244.0.23:34742 - 50233 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105505s
	[INFO] 10.244.0.23:42706 - 32458 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148964s
	[INFO] 10.244.0.23:47433 - 16041 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00404755s
	[INFO] 10.244.0.23:43796 - 36348 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003930977s
	[INFO] 10.244.0.28:59610 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000657818s
	[INFO] 10.244.0.28:58478 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000385159s
	
	
	==> describe nodes <==
	Name:               addons-086339
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-086339
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=addons-086339
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_50_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-086339
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-086339"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:50:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-086339
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:59:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    addons-086339
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0be334a213a4e9abad36168cb6c4d93
	  System UUID:                a0be334a-213a-4e9a-bad3-6168cb6c4d93
	  Boot ID:                    f5f61220-a436-4e42-9f0c-21fc51d403ab
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m29s
	  gadget                      gadget-p2brt                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-g7dks    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         8m23s
	  kube-system                 amd-gpu-device-plugin-lr4lw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 coredns-66bc5c9577-vsbrs                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m32s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 csi-hostpathplugin-z7vjp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 etcd-addons-086339                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m37s
	  kube-system                 kube-apiserver-addons-086339                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-controller-manager-addons-086339        200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-proxy-7fck9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-scheduler-addons-086339                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 snapshot-controller-7d9fbc56b8-4kwxj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 snapshot-controller-7d9fbc56b8-wzgp7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m29s  kube-proxy       
	  Normal  Starting                 8m38s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m37s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m37s  kubelet          Node addons-086339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s  kubelet          Node addons-086339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s  kubelet          Node addons-086339 status is now: NodeHasSufficientPID
	  Normal  NodeReady                8m37s  kubelet          Node addons-086339 status is now: NodeReady
	  Normal  RegisteredNode           8m33s  node-controller  Node addons-086339 event: Registered Node addons-086339 in Controller
	
	
	==> dmesg <==
	[  +0.136702] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.026933] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.422693] kauditd_printk_skb: 282 callbacks suppressed
	[  +0.000178] kauditd_printk_skb: 179 callbacks suppressed
	[Nov 1 09:51] kauditd_printk_skb: 480 callbacks suppressed
	[ +10.588247] kauditd_printk_skb: 85 callbacks suppressed
	[  +8.893680] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.164899] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.079506] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.550370] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.067618] kauditd_printk_skb: 131 callbacks suppressed
	[  +2.164833] kauditd_printk_skb: 126 callbacks suppressed
	[Nov 1 09:52] kauditd_printk_skb: 130 callbacks suppressed
	[  +6.663248] kauditd_printk_skb: 68 callbacks suppressed
	[  +6.258025] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000041] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.077918] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000038] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.048376] kauditd_printk_skb: 98 callbacks suppressed
	[  +0.000043] kauditd_printk_skb: 78 callbacks suppressed
	[Nov 1 09:53] kauditd_printk_skb: 58 callbacks suppressed
	[  +4.089930] kauditd_printk_skb: 42 callbacks suppressed
	[ +31.556122] kauditd_printk_skb: 74 callbacks suppressed
	[Nov 1 09:54] kauditd_printk_skb: 80 callbacks suppressed
	[ +15.872282] kauditd_printk_skb: 22 callbacks suppressed
	
	
	==> etcd [195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667] <==
	{"level":"warn","ts":"2025-11-01T09:51:53.828267Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"276.604194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:53.828291Z","caller":"traceutil/trace.go:172","msg":"trace[1920018772] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1053; }","duration":"276.642708ms","start":"2025-11-01T09:51:53.551641Z","end":"2025-11-01T09:51:53.828284Z","steps":["trace[1920018772] 'agreement among raft nodes before linearized reading'  (duration: 276.575926ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:51:53.828365Z","caller":"traceutil/trace.go:172","msg":"trace[1601158234] transaction","detail":"{read_only:false; response_revision:1054; number_of_response:1; }","duration":"307.834445ms","start":"2025-11-01T09:51:53.520519Z","end":"2025-11-01T09:51:53.828354Z","steps":["trace[1601158234] 'process raft request'  (duration: 307.722523ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:51:53.829077Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:51:53.520485Z","time spent":"307.914654ms","remote":"127.0.0.1:50442","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4224,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" mod_revision:715 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" value_size:4158 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" > >"}
	{"level":"warn","ts":"2025-11-01T09:51:53.837101Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.85086ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:53.837158Z","caller":"traceutil/trace.go:172","msg":"trace[1726047932] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1054; }","duration":"205.918617ms","start":"2025-11-01T09:51:53.631230Z","end":"2025-11-01T09:51:53.837149Z","steps":["trace[1726047932] 'agreement among raft nodes before linearized reading'  (duration: 205.832252ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:51:53.837332Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.114488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:53.837352Z","caller":"traceutil/trace.go:172","msg":"trace[1767754287] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1054; }","duration":"160.137708ms","start":"2025-11-01T09:51:53.677208Z","end":"2025-11-01T09:51:53.837346Z","steps":["trace[1767754287] 'agreement among raft nodes before linearized reading'  (duration: 160.097095ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:51:53.837427Z","caller":"traceutil/trace.go:172","msg":"trace[169582400] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"313.012286ms","start":"2025-11-01T09:51:53.524403Z","end":"2025-11-01T09:51:53.837415Z","steps":["trace[169582400] 'process raft request'  (duration: 312.936714ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:51:53.837521Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:51:53.524385Z","time spent":"313.094727ms","remote":"127.0.0.1:50348","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4615,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" mod_revision:1047 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" value_size:4543 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" > >"}
	{"level":"warn","ts":"2025-11-01T09:51:53.837540Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.263588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:53.837560Z","caller":"traceutil/trace.go:172","msg":"trace[1222634] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1055; }","duration":"187.33ms","start":"2025-11-01T09:51:53.650224Z","end":"2025-11-01T09:51:53.837554Z","steps":["trace[1222634] 'agreement among raft nodes before linearized reading'  (duration: 187.245695ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:51:57.997674Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.945423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:57.998286Z","caller":"traceutil/trace.go:172","msg":"trace[902941296] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1089; }","duration":"106.560193ms","start":"2025-11-01T09:51:57.891708Z","end":"2025-11-01T09:51:57.998268Z","steps":["trace[902941296] 'range keys from in-memory index tree'  (duration: 105.862666ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:52:04.319796Z","caller":"traceutil/trace.go:172","msg":"trace[427956117] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"140.175418ms","start":"2025-11-01T09:52:04.179583Z","end":"2025-11-01T09:52:04.319759Z","steps":["trace[427956117] 'process raft request'  (duration: 140.063245ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:52:08.551381Z","caller":"traceutil/trace.go:172","msg":"trace[603420838] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"197.437726ms","start":"2025-11-01T09:52:08.353928Z","end":"2025-11-01T09:52:08.551366Z","steps":["trace[603420838] 'process raft request'  (duration: 197.339599ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:52:12.328289Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.65917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:52:12.328359Z","caller":"traceutil/trace.go:172","msg":"trace[1819451364] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"151.738106ms","start":"2025-11-01T09:52:12.176611Z","end":"2025-11-01T09:52:12.328349Z","steps":["trace[1819451364] 'range keys from in-memory index tree'  (duration: 151.603213ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:52:19.593365Z","caller":"traceutil/trace.go:172","msg":"trace[1734006161] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"230.197039ms","start":"2025-11-01T09:52:19.363155Z","end":"2025-11-01T09:52:19.593352Z","steps":["trace[1734006161] 'process raft request'  (duration: 230.054763ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:53:03.073159Z","caller":"traceutil/trace.go:172","msg":"trace[844100605] linearizableReadLoop","detail":"{readStateIndex:1471; appliedIndex:1471; }","duration":"184.287063ms","start":"2025-11-01T09:53:02.888805Z","end":"2025-11-01T09:53:03.073092Z","steps":["trace[844100605] 'read index received'  (duration: 184.274805ms)","trace[844100605] 'applied index is now lower than readState.Index'  (duration: 11.185µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:53:03.073336Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.514416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:53:03.073356Z","caller":"traceutil/trace.go:172","msg":"trace[379602539] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1424; }","duration":"184.548883ms","start":"2025-11-01T09:53:02.888802Z","end":"2025-11-01T09:53:03.073351Z","steps":["trace[379602539] 'agreement among raft nodes before linearized reading'  (duration: 184.47499ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:53:03.073440Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.732425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-11-01T09:53:03.073464Z","caller":"traceutil/trace.go:172","msg":"trace[1841159583] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1424; }","duration":"173.762443ms","start":"2025-11-01T09:53:02.899696Z","end":"2025-11-01T09:53:03.073458Z","steps":["trace[1841159583] 'agreement among raft nodes before linearized reading'  (duration: 173.676648ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:53:03.073212Z","caller":"traceutil/trace.go:172","msg":"trace[990398784] transaction","detail":"{read_only:false; response_revision:1424; number_of_response:1; }","duration":"298.156963ms","start":"2025-11-01T09:53:02.775044Z","end":"2025-11-01T09:53:03.073201Z","steps":["trace[990398784] 'process raft request'  (duration: 298.073448ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:59:17 up 9 min,  0 users,  load average: 0.19, 0.68, 0.55
	Linux addons-086339 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5] <==
	I1101 09:50:55.476490       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:50:55.994199       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.109.112.13"}
	I1101 09:50:56.035477       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1101 09:50:56.324487       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.96.83.31"}
	W1101 09:50:57.099010       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:50:57.124752       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1101 09:50:58.973540       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.194.15"}
	W1101 09:51:14.142521       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:51:14.159779       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:51:14.218764       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:51:14.228684       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:51:45.524276       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 09:51:45.524485       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.150.255:443: connect: connection refused" logger="UnhandledError"
	E1101 09:51:45.525272       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 09:51:45.526596       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.150.255:443: connect: connection refused" logger="UnhandledError"
	E1101 09:51:45.531959       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.150.255:443: connect: connection refused" logger="UnhandledError"
	I1101 09:51:45.647009       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:52:39.519537       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:41530: use of closed network connection
	I1101 09:52:48.989373       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.57.119"}
	I1101 09:53:11.180343       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 09:53:11.353371       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.7.153"}
	I1101 09:53:46.542354       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2] <==
	I1101 09:50:44.141750       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:50:44.157085       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:50:44.158318       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:50:44.158339       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:50:44.158349       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:50:44.158354       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:50:44.159955       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:50:44.160156       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:50:44.160245       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:50:44.160320       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:50:44.160408       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:50:44.161689       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:50:44.167370       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	E1101 09:51:14.127299       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:51:14.127454       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 09:51:14.127514       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:51:14.185809       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:51:14.198574       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 09:51:14.229296       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:51:14.299732       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 09:51:44.237259       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:51:44.318152       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:52:52.843343       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1101 09:53:15.811359       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1101 09:54:13.498480       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66] <==
	I1101 09:50:47.380388       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:50:47.481009       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:50:47.481962       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.58"]
	E1101 09:50:47.483258       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:50:47.618974       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:50:47.619028       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:50:47.619055       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:50:47.646432       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:50:47.648118       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:50:47.648153       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:50:47.664129       1 config.go:309] "Starting node config controller"
	I1101 09:50:47.666955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:50:47.666969       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:50:47.665033       1 config.go:200] "Starting service config controller"
	I1101 09:50:47.666978       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:50:47.667949       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:50:47.667987       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:50:47.668010       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:50:47.668021       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:50:47.767136       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:50:47.771739       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:50:47.772010       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986] <==
	E1101 09:50:37.221936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:50:37.222056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:50:37.222116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:50:37.222130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:50:37.225229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:50:37.225317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:50:37.225378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:50:37.227418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:50:37.227443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:50:37.227647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:50:37.227768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:50:37.227996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:50:38.054220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:50:38.064603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:50:38.082458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:50:38.180400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:50:38.210958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:50:38.220410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:50:38.222634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:50:38.324209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:50:38.347306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:50:38.391541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:50:38.445129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:50:38.559973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:50:41.263288       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:58:20 addons-086339 kubelet[1515]: E1101 09:58:20.349345    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991100348952218  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:58:20 addons-086339 kubelet[1515]: E1101 09:58:20.349370    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991100348952218  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:58:28 addons-086339 kubelet[1515]: E1101 09:58:28.179419    1515 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 09:58:28 addons-086339 kubelet[1515]: E1101 09:58:28.179474    1515 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 09:58:28 addons-086339 kubelet[1515]: E1101 09:58:28.180405    1515 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(eb0ec6cf-d05a-4514-92a8-21a6ef18f433): ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:58:28 addons-086339 kubelet[1515]: E1101 09:58:28.180506    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
	Nov 01 09:58:30 addons-086339 kubelet[1515]: E1101 09:58:30.353173    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991110352540207  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:58:30 addons-086339 kubelet[1515]: E1101 09:58:30.353219    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991110352540207  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:58:37 addons-086339 kubelet[1515]: I1101 09:58:37.026164    1515 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-lr4lw" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:58:40 addons-086339 kubelet[1515]: E1101 09:58:40.028049    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
	Nov 01 09:58:40 addons-086339 kubelet[1515]: E1101 09:58:40.355885    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991120355261994  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:58:40 addons-086339 kubelet[1515]: E1101 09:58:40.355918    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991120355261994  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:58:50 addons-086339 kubelet[1515]: E1101 09:58:50.361518    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991130361079990  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:58:50 addons-086339 kubelet[1515]: E1101 09:58:50.361780    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991130361079990  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:58:52 addons-086339 kubelet[1515]: E1101 09:58:52.028132    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
	Nov 01 09:58:59 addons-086339 kubelet[1515]: E1101 09:58:59.592813    1515 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 01 09:58:59 addons-086339 kubelet[1515]: E1101 09:58:59.592917    1515 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 01 09:58:59 addons-086339 kubelet[1515]: E1101 09:58:59.593136    1515 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(bb9a245d-f766-4ca6-8de9-96b056a9cab4): ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:58:59 addons-086339 kubelet[1515]: E1101 09:58:59.593172    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="bb9a245d-f766-4ca6-8de9-96b056a9cab4"
	Nov 01 09:59:00 addons-086339 kubelet[1515]: E1101 09:59:00.365230    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991140364758565  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:59:00 addons-086339 kubelet[1515]: E1101 09:59:00.365280    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991140364758565  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:59:03 addons-086339 kubelet[1515]: E1101 09:59:03.027139    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
	Nov 01 09:59:10 addons-086339 kubelet[1515]: E1101 09:59:10.368424    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761991150367736074  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:59:10 addons-086339 kubelet[1515]: E1101 09:59:10.368976    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761991150367736074  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:59:13 addons-086339 kubelet[1515]: E1101 09:59:13.028585    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="bb9a245d-f766-4ca6-8de9-96b056a9cab4"
	
	
	==> storage-provisioner [6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f] <==
	W1101 09:58:53.057946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:58:55.061689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:58:55.066921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:58:57.071062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:58:57.077096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:58:59.081086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:58:59.088927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:01.093352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:01.112921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:03.116482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:03.123247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:05.127247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:05.135234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:07.138933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:07.145361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:09.149157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:09.157182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:11.161581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:11.169979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:13.173945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:13.182062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:15.186134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:15.191105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:17.195576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:59:17.204338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-086339 -n addons-086339
helpers_test.go:269: (dbg) Run:  kubectl --context addons-086339 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn: exit status 1 (89.574578ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-086339/192.168.39.58
	Start Time:       Sat, 01 Nov 2025 09:53:11 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sggwf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sggwf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m7s                 default-scheduler  Successfully assigned default/nginx to addons-086339
	  Warning  Failed     3m11s                kubelet            Failed to pull image "docker.io/nginx:alpine": fetching target platform image selected from image index: reading manifest sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     96s (x2 over 5m)     kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     96s (x3 over 5m)     kubelet            Error: ErrImagePull
	  Normal   BackOff    60s (x5 over 4m59s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     60s (x5 over 4m59s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    45s (x4 over 6m7s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-086339/192.168.39.58
	Start Time:       Sat, 01 Nov 2025 09:53:15 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x27kl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-x27kl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m3s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-086339
	  Warning  Failed     2m39s (x2 over 4m28s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     50s (x3 over 4m28s)    kubelet            Error: ErrImagePull
	  Warning  Failed     50s                    kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    15s (x5 over 4m28s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     15s (x5 over 4m28s)    kubelet            Error: ImagePullBackOff
	  Normal   Pulling    4s (x4 over 6m2s)      kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-086339/192.168.39.58
	Start Time:       Sat, 01 Nov 2025 09:52:55 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t5c9x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-t5c9x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m23s                default-scheduler  Successfully assigned default/test-local-path to addons-086339
	  Warning  Failed     5m34s                kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    74s (x4 over 6m20s)  kubelet            Pulling image "busybox:stable"
	  Warning  Failed     19s (x4 over 5m34s)  kubelet            Error: ErrImagePull
	  Warning  Failed     19s (x3 over 3m57s)  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    5s (x6 over 5m34s)   kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     5s (x6 over 5m34s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-d7qkm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dw6sn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.002986596s)
--- FAIL: TestAddons/parallel/CSI (379.69s)
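Note on the failure above: the non-running pods listed in this post-mortem (nginx, task-pv-pod, test-local-path) are all stuck in ImagePullBackOff on docker.io images with "toomanyrequests", i.e. the runner exhausted Docker Hub's anonymous pull rate limit; no CSI pod appears in the non-running list. Authenticated pulls or a docker.io registry mirror for the CRI-O runtime would sidestep this. The Go sketch below is one way to check the remaining anonymous quota from the affected host; it assumes the ratelimitpreview/test probe repository and the ratelimit-limit / ratelimit-remaining headers that Docker Hub documents (a HEAD on the manifest is documented not to consume a pull).

// ratelimitcheck.go - probe Docker Hub's anonymous pull quota (sketch only;
// endpoints and header names are the ones documented by Docker Hub).
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Fetch an anonymous pull token scoped to the documented probe repository.
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest and read the rate-limit headers from the response.
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	fmt.Println("status:             ", res.Status)
	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
}

If ratelimit-remaining reports 0 (or the HEAD itself returns 429), the image-pull failures in the tests below are the same quota exhaustion and will clear once the window resets or the pulls are authenticated.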

                                                
                                    
TestAddons/parallel/LocalPath (232.83s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-086339 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-086339 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-086339 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [bb9a245d-f766-4ca6-8de9-96b056a9cab4] Pending
helpers_test.go:352: "test-local-path" [bb9a245d-f766-4ca6-8de9-96b056a9cab4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:337: TestAddons/parallel/LocalPath: WARNING: pod list for "default" "run=test-local-path" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:962: ***** TestAddons/parallel/LocalPath: pod "run=test-local-path" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:962: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-086339 -n addons-086339
addons_test.go:962: TestAddons/parallel/LocalPath: showing logs for failed pods as of 2025-11-01 09:55:55.575314034 +0000 UTC m=+370.317522011
addons_test.go:962: (dbg) Run:  kubectl --context addons-086339 describe po test-local-path -n default
addons_test.go:962: (dbg) kubectl --context addons-086339 describe po test-local-path -n default:
Name:             test-local-path
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-086339/192.168.39.58
Start Time:       Sat, 01 Nov 2025 09:52:55 +0000
Labels:           run=test-local-path
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
IP:  10.244.0.27
Containers:
busybox:
Container ID:  
Image:         busybox:stable
Image ID:      
Port:          <none>
Host Port:     <none>
Command:
sh
-c
echo 'local-path-provisioner' > /test/file1
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/test from data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t5c9x (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
data:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  test-pvc
ReadOnly:   false
kube-api-access-t5c9x:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  3m                   default-scheduler  Successfully assigned default/test-local-path to addons-086339
Warning  Failed     2m11s                kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     34s (x2 over 2m11s)  kubelet            Error: ErrImagePull
Warning  Failed     34s                  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    18s (x2 over 2m11s)  kubelet            Back-off pulling image "busybox:stable"
Warning  Failed     18s (x2 over 2m11s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    5s (x3 over 2m57s)   kubelet            Pulling image "busybox:stable"
addons_test.go:962: (dbg) Run:  kubectl --context addons-086339 logs test-local-path -n default
addons_test.go:962: (dbg) Non-zero exit: kubectl --context addons-086339 logs test-local-path -n default: exit status 1 (72.255647ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "test-local-path" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:962: kubectl --context addons-086339 logs test-local-path -n default: exit status 1
addons_test.go:963: failed waiting for test-local-path pod: run=test-local-path within 3m0s: context deadline exceeded
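The 3m0s wait that just failed polls the pods matching run=test-local-path until they report Ready or the context deadline expires; with the busybox pull backing off, the deadline fires first. Below is a rough client-go sketch of that style of readiness wait. It is illustrative only, not the helper from helpers_test.go; the kubeconfig path, namespace, and label selector are assumptions lifted from the log output above.

// waitready.go - poll until pods matching a label report Ready, or time out.
// Illustrative sketch, not the minikube test helper.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; the CI job sets its own KUBECONFIG.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 3*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "run=test-local-path"})
			if err != nil {
				return false, err // give up on hard API errors
			}
			if len(pods.Items) == 0 {
				return false, nil // not scheduled yet, keep polling
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil // e.g. still in ImagePullBackOff
				}
			}
			return true, nil
		})
	if err != nil {
		fmt.Println("pods never became Ready:", err) // analogous to the context deadline exceeded above
		return
	}
	fmt.Println("all matching pods are Ready")
}

Returning (false, nil) from the condition keeps polling, which is why an ImagePullBackOff quietly consumes the whole window; only the timeout surfaces, as the "context deadline exceeded" recorded above.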
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/LocalPath]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-086339 -n addons-086339
helpers_test.go:252: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 logs -n 25: (1.330027582s)
helpers_test.go:260: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ delete  │ -p download-only-319914                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-319914 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ start   │ -o=json --download-only -p download-only-036288 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-036288 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ delete  │ -p download-only-036288                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-036288 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ delete  │ -p download-only-319914                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-319914 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ delete  │ -p download-only-036288                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-036288 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ start   │ --download-only -p binary-mirror-623089 --alsologtostderr --binary-mirror http://127.0.0.1:33603 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-623089 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ delete  │ -p binary-mirror-623089                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-623089 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ addons  │ enable dashboard -p addons-086339                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ addons  │ disable dashboard -p addons-086339                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ start   │ -p addons-086339 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ enable headlamp -p addons-086339 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:52 UTC │
	│ addons  │ addons-086339 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:52 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ ip      │ addons-086339 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-086339                                                                                                                                                                                                                                                                                                                                                                                         │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:53 UTC │ 01 Nov 25 09:53 UTC │
	│ addons  │ addons-086339 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-086339        │ jenkins │ v1.37.0 │ 01 Nov 25 09:54 UTC │ 01 Nov 25 09:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:49:57
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:49:57.488461   74584 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:57.488721   74584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:57.488731   74584 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:57.488735   74584 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:57.488932   74584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 09:49:57.489456   74584 out.go:368] Setting JSON to false
	I1101 09:49:57.490315   74584 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5545,"bootTime":1761985052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:49:57.490405   74584 start.go:143] virtualization: kvm guest
	I1101 09:49:57.492349   74584 out.go:179] * [addons-086339] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:49:57.493732   74584 notify.go:221] Checking for updates...
	I1101 09:49:57.493769   74584 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 09:49:57.495124   74584 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:57.496430   74584 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 09:49:57.497763   74584 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 09:49:57.499098   74584 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:49:57.500291   74584 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:49:57.501672   74584 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:49:57.530798   74584 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 09:49:57.531916   74584 start.go:309] selected driver: kvm2
	I1101 09:49:57.531929   74584 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:49:57.531940   74584 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:49:57.532704   74584 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:49:57.532950   74584 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:49:57.532995   74584 cni.go:84] Creating CNI manager for ""
	I1101 09:49:57.533055   74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:49:57.533066   74584 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:49:57.533123   74584 start.go:353] cluster config:
	{Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:57.533236   74584 iso.go:125] acquiring lock: {Name:mk49d9a272bb99d336f82dfc5631a4c8ce9271c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:49:57.534643   74584 out.go:179] * Starting "addons-086339" primary control-plane node in "addons-086339" cluster
	I1101 09:49:57.535623   74584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:49:57.535667   74584 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:49:57.535680   74584 cache.go:59] Caching tarball of preloaded images
	I1101 09:49:57.535759   74584 preload.go:233] Found /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:49:57.535771   74584 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:49:57.536122   74584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json ...
	I1101 09:49:57.536151   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json: {Name:mka52b297897069cd677da03eb710fe0f89e4afc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:49:57.536283   74584 start.go:360] acquireMachinesLock for addons-086339: {Name:mk53a05d125fe91ead2a39c6bbf2ba926c471e2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 09:49:57.536359   74584 start.go:364] duration metric: took 60.989µs to acquireMachinesLock for "addons-086339"
	I1101 09:49:57.536383   74584 start.go:93] Provisioning new machine with config: &{Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:49:57.536443   74584 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 09:49:57.537962   74584 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1101 09:49:57.538116   74584 start.go:159] libmachine.API.Create for "addons-086339" (driver="kvm2")
	I1101 09:49:57.538147   74584 client.go:173] LocalClient.Create starting
	I1101 09:49:57.538241   74584 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem
	I1101 09:49:57.899320   74584 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem
	I1101 09:49:58.572079   74584 main.go:143] libmachine: creating domain...
	I1101 09:49:58.572106   74584 main.go:143] libmachine: creating network...
	I1101 09:49:58.573844   74584 main.go:143] libmachine: found existing default network
	I1101 09:49:58.574184   74584 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:49:58.574920   74584 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c7bfb0}
	I1101 09:49:58.575053   74584 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-086339</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:49:58.580872   74584 main.go:143] libmachine: creating private network mk-addons-086339 192.168.39.0/24...
	I1101 09:49:58.651337   74584 main.go:143] libmachine: private network mk-addons-086339 192.168.39.0/24 created
	I1101 09:49:58.651625   74584 main.go:143] libmachine: <network>
	  <name>mk-addons-086339</name>
	  <uuid>3e8e4cbf-1e3f-4b76-b08f-c763f9bae7dc</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:4f:55:bf'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:49:58.651651   74584 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 ...
	I1101 09:49:58.651674   74584 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 09:49:58.651685   74584 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 09:49:58.651769   74584 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21830-70113/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 09:49:58.889523   74584 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa...
	I1101 09:49:59.320606   74584 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk...
	I1101 09:49:59.320670   74584 main.go:143] libmachine: Writing magic tar header
	I1101 09:49:59.320695   74584 main.go:143] libmachine: Writing SSH key tar header
	I1101 09:49:59.320769   74584 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 ...
	I1101 09:49:59.320832   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339
	I1101 09:49:59.320855   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339 (perms=drwx------)
	I1101 09:49:59.320865   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines
	I1101 09:49:59.320880   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines (perms=drwxr-xr-x)
	I1101 09:49:59.320892   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 09:49:59.320902   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube (perms=drwxr-xr-x)
	I1101 09:49:59.320910   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113
	I1101 09:49:59.320919   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113 (perms=drwxrwxr-x)
	I1101 09:49:59.320926   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 09:49:59.320936   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 09:49:59.320946   74584 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 09:49:59.320953   74584 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 09:49:59.320964   74584 main.go:143] libmachine: checking permissions on dir: /home
	I1101 09:49:59.320971   74584 main.go:143] libmachine: skipping /home - not owner
	I1101 09:49:59.320977   74584 main.go:143] libmachine: defining domain...
	I1101 09:49:59.322386   74584 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-086339</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-086339'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:49:59.327390   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:41:14:53 in network default
	I1101 09:49:59.328042   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:49:59.328057   74584 main.go:143] libmachine: starting domain...
	I1101 09:49:59.328062   74584 main.go:143] libmachine: ensuring networks are active...
	I1101 09:49:59.328857   74584 main.go:143] libmachine: Ensuring network default is active
	I1101 09:49:59.329422   74584 main.go:143] libmachine: Ensuring network mk-addons-086339 is active
	I1101 09:49:59.330127   74584 main.go:143] libmachine: getting domain XML...
	I1101 09:49:59.331370   74584 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-086339</name>
	  <uuid>a0be334a-213a-4e9a-bad3-6168cb6c4d93</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/addons-086339.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:b9:a4:85'/>
	      <source network='mk-addons-086339'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:41:14:53'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:50:00.609088   74584 main.go:143] libmachine: waiting for domain to start...
	I1101 09:50:00.610434   74584 main.go:143] libmachine: domain is now running
	I1101 09:50:00.610456   74584 main.go:143] libmachine: waiting for IP...
	I1101 09:50:00.611312   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:00.612106   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:00.612125   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:00.612466   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:00.612543   74584 retry.go:31] will retry after 238.184391ms: waiting for domain to come up
	I1101 09:50:00.851957   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:00.852980   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:00.852999   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:00.853378   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:00.853417   74584 retry.go:31] will retry after 315.459021ms: waiting for domain to come up
	I1101 09:50:01.170821   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:01.171618   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:01.171637   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:01.172000   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:01.172045   74584 retry.go:31] will retry after 375.800667ms: waiting for domain to come up
	I1101 09:50:01.549768   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:01.550551   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:01.550568   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:01.550912   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:01.550947   74584 retry.go:31] will retry after 436.650242ms: waiting for domain to come up
	I1101 09:50:01.989558   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:01.990329   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:01.990346   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:01.990674   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:01.990717   74584 retry.go:31] will retry after 579.834412ms: waiting for domain to come up
	I1101 09:50:02.572692   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:02.573467   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:02.573488   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:02.573815   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:02.573865   74584 retry.go:31] will retry after 839.063755ms: waiting for domain to come up
	I1101 09:50:03.414428   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:03.415319   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:03.415342   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:03.415659   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:03.415702   74584 retry.go:31] will retry after 768.970672ms: waiting for domain to come up
	I1101 09:50:04.186700   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:04.187419   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:04.187437   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:04.187709   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:04.187746   74584 retry.go:31] will retry after 1.192575866s: waiting for domain to come up
	I1101 09:50:05.382202   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:05.382884   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:05.382907   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:05.383270   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:05.383321   74584 retry.go:31] will retry after 1.520355221s: waiting for domain to come up
	I1101 09:50:06.906019   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:06.906685   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:06.906702   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:06.906966   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:06.907000   74584 retry.go:31] will retry after 1.452783326s: waiting for domain to come up
	I1101 09:50:08.361823   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:08.362686   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:08.362711   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:08.363062   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:08.363109   74584 retry.go:31] will retry after 1.991395227s: waiting for domain to come up
	I1101 09:50:10.357523   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:10.358353   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:10.358372   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:10.358693   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:10.358739   74584 retry.go:31] will retry after 3.532288823s: waiting for domain to come up
	I1101 09:50:13.893052   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:13.893671   74584 main.go:143] libmachine: no network interface addresses found for domain addons-086339 (source=lease)
	I1101 09:50:13.893684   74584 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:50:13.893975   74584 main.go:143] libmachine: unable to find current IP address of domain addons-086339 in network mk-addons-086339 (interfaces detected: [])
	I1101 09:50:13.894012   74584 retry.go:31] will retry after 4.252229089s: waiting for domain to come up
	I1101 09:50:18.147616   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.148327   74584 main.go:143] libmachine: domain addons-086339 has current primary IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.148350   74584 main.go:143] libmachine: found domain IP: 192.168.39.58
	I1101 09:50:18.148365   74584 main.go:143] libmachine: reserving static IP address...
	I1101 09:50:18.148791   74584 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-086339", mac: "52:54:00:b9:a4:85", ip: "192.168.39.58"} in network mk-addons-086339
	I1101 09:50:18.327560   74584 main.go:143] libmachine: reserved static IP address 192.168.39.58 for domain addons-086339
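The repeated "will retry after ...: waiting for domain to come up" lines above are a retry loop with growing, jittered delays that polls libvirt (lease, then ARP) until the new domain reports an IP. A minimal sketch of that pattern, not minikube's actual retry.go; lookupIP is a hypothetical stand-in for the lease/ARP query:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoIP stands in for "unable to find current IP address of domain".
var errNoIP = errors.New("no IP address yet")

// lookupIP is a hypothetical stand-in for querying DHCP leases / ARP tables.
func lookupIP(attempt int) (string, error) {
	if attempt < 14 {
		return "", errNoIP
	}
	return "192.168.39.58", nil
}

func main() {
	backoff := 200 * time.Millisecond
	for attempt := 0; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found domain IP:", ip)
			return
		}
		// Grow the delay and add jitter, roughly matching the increasing
		// "will retry after ..." intervals in the log above.
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		if backoff < 4*time.Second {
			backoff = backoff * 3 / 2
		}
	}
}
```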
	I1101 09:50:18.327599   74584 main.go:143] libmachine: waiting for SSH...
	I1101 09:50:18.327609   74584 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 09:50:18.330699   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.331371   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.331408   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.331641   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.331928   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.331942   74584 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 09:50:18.444329   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:50:18.444817   74584 main.go:143] libmachine: domain creation complete
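"waiting for SSH..." followed by "About to run SSH command: exit 0" means the driver polls until the guest's SSH endpoint answers and a trivial command succeeds. A minimal sketch of the port-probe half using only the standard library; the real WaitForSSH additionally authenticates and runs `exit 0` over an SSH session:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls addr (e.g. "192.168.39.58:22") until a TCP connection
// succeeds or the deadline passes. This sketch only checks that the port
// is accepting connections.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	if err := waitForSSH("192.168.39.58:22", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH port is up")
}
```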
	I1101 09:50:18.446547   74584 machine.go:94] provisionDockerMachine start ...
	I1101 09:50:18.449158   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.449586   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.449617   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.449805   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.450004   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.450014   74584 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:50:18.560574   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 09:50:18.560609   74584 buildroot.go:166] provisioning hostname "addons-086339"
	I1101 09:50:18.564015   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.564582   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.564616   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.564819   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.565060   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.565073   74584 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-086339 && echo "addons-086339" | sudo tee /etc/hostname
	I1101 09:50:18.692294   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-086339
	
	I1101 09:50:18.695361   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.695730   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.695754   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.695958   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:18.696217   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:18.696238   74584 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-086339' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-086339/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-086339' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:50:18.817833   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 
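The shell snippet above is idempotent: it patches /etc/hosts only if no line already maps the new hostname, rewriting an existing 127.0.1.1 entry when present and appending one otherwise. A small sketch of how such a command string can be assembled in Go; hostsPatchCmd is a hypothetical helper that mirrors the logged shell, not minikube's actual function:

```go
package main

import "fmt"

// hostsPatchCmd returns a shell snippet that maps 127.0.1.1 to the given
// hostname in /etc/hosts, only if no entry for that hostname exists yet.
func hostsPatchCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsPatchCmd("addons-086339"))
}
```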
	I1101 09:50:18.817861   74584 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 09:50:18.817917   74584 buildroot.go:174] setting up certificates
	I1101 09:50:18.817929   74584 provision.go:84] configureAuth start
	I1101 09:50:18.820836   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.821182   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.821205   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.823468   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.823880   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.823917   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.824065   74584 provision.go:143] copyHostCerts
	I1101 09:50:18.824126   74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 09:50:18.824236   74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 09:50:18.824293   74584 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 09:50:18.824393   74584 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.addons-086339 san=[127.0.0.1 192.168.39.58 addons-086339 localhost minikube]
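"generating server cert ... san=[127.0.0.1 192.168.39.58 addons-086339 localhost minikube]" shows the machine certificate carrying both IP and DNS SANs. A self-contained crypto/x509 sketch of a certificate with those SANs; it is self-signed for brevity, whereas the real server.pem is signed by the minikube CA key:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-086339"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above.
		DNSNames:    []string{"addons-086339", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.58")},
	}
	// Self-signed for the sketch; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```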
	I1101 09:50:18.982158   74584 provision.go:177] copyRemoteCerts
	I1101 09:50:18.982222   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:50:18.984649   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.985018   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:18.985044   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:18.985191   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.074666   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:50:19.105450   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:50:19.136079   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:50:19.165744   74584 provision.go:87] duration metric: took 347.798818ms to configureAuth
	I1101 09:50:19.165785   74584 buildroot.go:189] setting minikube options for container-runtime
	I1101 09:50:19.165985   74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:19.168523   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.169168   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.169200   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.169383   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:19.169583   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:19.169597   74584 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:50:19.428804   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:50:19.428828   74584 machine.go:97] duration metric: took 982.268013ms to provisionDockerMachine
	I1101 09:50:19.428839   74584 client.go:176] duration metric: took 21.890685225s to LocalClient.Create
	I1101 09:50:19.428858   74584 start.go:167] duration metric: took 21.89074228s to libmachine.API.Create "addons-086339"
	I1101 09:50:19.428865   74584 start.go:293] postStartSetup for "addons-086339" (driver="kvm2")
	I1101 09:50:19.428874   74584 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:50:19.428936   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:50:19.431801   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.432251   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.432273   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.432405   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.520001   74584 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:50:19.525231   74584 info.go:137] Remote host: Buildroot 2025.02
	I1101 09:50:19.525259   74584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 09:50:19.525321   74584 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 09:50:19.525345   74584 start.go:296] duration metric: took 96.474195ms for postStartSetup
	I1101 09:50:19.528299   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.528696   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.528717   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.528916   74584 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/config.json ...
	I1101 09:50:19.529095   74584 start.go:128] duration metric: took 21.992639315s to createHost
	I1101 09:50:19.531331   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.531699   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.531722   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.531876   74584 main.go:143] libmachine: Using SSH client type: native
	I1101 09:50:19.532065   74584 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.58 22 <nil> <nil>}
	I1101 09:50:19.532075   74584 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 09:50:19.643235   74584 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761990619.607534656
	
	I1101 09:50:19.643257   74584 fix.go:216] guest clock: 1761990619.607534656
	I1101 09:50:19.643268   74584 fix.go:229] Guest: 2025-11-01 09:50:19.607534656 +0000 UTC Remote: 2025-11-01 09:50:19.52910603 +0000 UTC m=+22.094671738 (delta=78.428626ms)
	I1101 09:50:19.643283   74584 fix.go:200] guest clock delta is within tolerance: 78.428626ms
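The "guest clock" lines compare the VM's `date +%s.%N` output against the host wall clock and act only when the delta exceeds a tolerance. A minimal sketch of that comparison; the one-second tolerance here is an assumption for illustration, not minikube's configured value:

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is from the local (host) clock.
func clockDelta(guestDate string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestDate, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	delta, err := clockDelta("1761990619.607534656")
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	if delta < tolerance && delta > -tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```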
	I1101 09:50:19.643288   74584 start.go:83] releasing machines lock for "addons-086339", held for 22.106918768s
	I1101 09:50:19.646471   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.646896   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.646926   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.647587   74584 ssh_runner.go:195] Run: cat /version.json
	I1101 09:50:19.647618   74584 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:50:19.650456   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.650903   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.650929   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.650937   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.651111   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.651498   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:19.651548   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:19.651722   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:19.732914   74584 ssh_runner.go:195] Run: systemctl --version
	I1101 09:50:19.761438   74584 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:50:19.921978   74584 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:50:19.929230   74584 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:50:19.929321   74584 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:50:19.949743   74584 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:50:19.949779   74584 start.go:496] detecting cgroup driver to use...
	I1101 09:50:19.949851   74584 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:50:19.969767   74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:50:19.988383   74584 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:50:19.988445   74584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:50:20.006528   74584 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:50:20.025137   74584 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:50:20.177314   74584 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:50:20.388642   74584 docker.go:234] disabling docker service ...
	I1101 09:50:20.388724   74584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:50:20.405986   74584 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:50:20.421236   74584 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:50:20.585305   74584 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:50:20.731424   74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:50:20.748134   74584 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:50:20.778555   74584 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:50:20.778621   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.792483   74584 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:50:20.792563   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.806228   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.819314   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.832971   74584 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:50:20.847580   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.861416   74584 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.884021   74584 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:50:20.898082   74584 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:50:20.909995   74584 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 09:50:20.910054   74584 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 09:50:20.932503   74584 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:50:20.945456   74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:50:21.091518   74584 ssh_runner.go:195] Run: sudo systemctl restart crio
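The block above edits /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause_image, cgroup_manager, conmon_cgroup, default_sysctls), then daemon-reloads and restarts crio. A small sketch of the same "replace the whole key line, append if missing" idea done with regexp on an in-memory copy of the file; setConfKey is a hypothetical helper, not minikube's code:

```go
package main

import (
	"fmt"
	"regexp"
)

// setConfKey replaces (or appends) a `key = "value"` line in a crio drop-in,
// mirroring what the sed invocations in the log do in place on the guest.
func setConfKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	if re.MatchString(conf) {
		return re.ReplaceAllString(conf, line)
	}
	return conf + "\n" + line + "\n"
}

func main() {
	conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	conf = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setConfKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Print(conf)
}
```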
	I1101 09:50:21.209311   74584 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:50:21.209394   74584 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:50:21.215638   74584 start.go:564] Will wait 60s for crictl version
	I1101 09:50:21.215718   74584 ssh_runner.go:195] Run: which crictl
	I1101 09:50:21.220104   74584 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 09:50:21.265319   74584 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 09:50:21.265428   74584 ssh_runner.go:195] Run: crio --version
	I1101 09:50:21.296407   74584 ssh_runner.go:195] Run: crio --version
	I1101 09:50:21.330270   74584 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 09:50:21.333966   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:21.334360   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:21.334382   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:21.334577   74584 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 09:50:21.339385   74584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:50:21.355743   74584 kubeadm.go:884] updating cluster {Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.
1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:50:21.355864   74584 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:50:21.355925   74584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:50:21.393026   74584 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 09:50:21.393097   74584 ssh_runner.go:195] Run: which lz4
	I1101 09:50:21.397900   74584 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 09:50:21.403032   74584 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 09:50:21.403064   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 09:50:22.958959   74584 crio.go:462] duration metric: took 1.561103562s to copy over tarball
	I1101 09:50:22.959030   74584 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 09:50:24.646069   74584 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.687012473s)
	I1101 09:50:24.646110   74584 crio.go:469] duration metric: took 1.687120275s to extract the tarball
	I1101 09:50:24.646124   74584 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 09:50:24.689384   74584 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:50:24.745551   74584 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:50:24.745581   74584 cache_images.go:86] Images are preloaded, skipping loading
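When the expected images are missing, the preloaded tarball is copied to /preloaded.tar.lz4 and unpacked with `tar --xattrs -I lz4 -C /var -xf`, as logged above. A sketch of driving that extraction from Go via os/exec; it assumes `tar`, `lz4`, and passwordless sudo are available, as they are inside the minikube guest:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks an lz4-compressed image tarball into destDir,
// preserving extended attributes, roughly as the log above does over SSH.
func extractPreload(tarball, destDir string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar failed: %v: %s", err, out)
	}
	fmt.Printf("duration metric: took %s to extract the tarball\n", time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
```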
	I1101 09:50:24.745590   74584 kubeadm.go:935] updating node { 192.168.39.58 8443 v1.34.1 crio true true} ...
	I1101 09:50:24.745676   74584 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-086339 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:50:24.745742   74584 ssh_runner.go:195] Run: crio config
	I1101 09:50:24.792600   74584 cni.go:84] Creating CNI manager for ""
	I1101 09:50:24.792624   74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:50:24.792643   74584 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:50:24.792678   74584 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.58 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-086339 NodeName:addons-086339 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:50:24.792797   74584 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-086339"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.58"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.58"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:50:24.792863   74584 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:50:24.805312   74584 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:50:24.805386   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:50:24.817318   74584 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1101 09:50:24.839738   74584 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:50:24.861206   74584 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
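The "scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)" lines copy generated config straight from an in-memory buffer to a path on the guest. One way to get the same effect with just the ssh client binary is to pipe the bytes into `sudo tee` on the remote side; this is a hedged illustration assuming key-based SSH access, not minikube's actual ssh_runner:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// writeRemoteFile streams data to path on the remote host by piping it
// into `sudo tee` over ssh. Illustration of the "scp memory --> path" idea.
func writeRemoteFile(host, keyPath, path string, data []byte) error {
	cmd := exec.Command("ssh", "-i", keyPath, "docker@"+host,
		fmt.Sprintf("sudo tee %s >/dev/null", path))
	cmd.Stdin = bytes.NewReader(data)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("remote write failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	kubeletDropIn := []byte("[Service]\nExecStart=\n")
	err := writeRemoteFile("192.168.39.58",
		"/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa",
		"/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", kubeletDropIn)
	if err != nil {
		fmt.Println(err)
	}
}
```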
	I1101 09:50:24.882598   74584 ssh_runner.go:195] Run: grep 192.168.39.58	control-plane.minikube.internal$ /etc/hosts
	I1101 09:50:24.887202   74584 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:50:24.903393   74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:50:25.046563   74584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:50:25.078339   74584 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339 for IP: 192.168.39.58
	I1101 09:50:25.078373   74584 certs.go:195] generating shared ca certs ...
	I1101 09:50:25.078393   74584 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.078607   74584 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 09:50:25.370750   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt ...
	I1101 09:50:25.370787   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt: {Name:mk44e2ef3879300ef465f5e14a88e17a335203c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.370979   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key ...
	I1101 09:50:25.370991   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key: {Name:mk6a6a936cb10734e248a5e184dc212d0dd50fee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.371084   74584 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 09:50:25.596029   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt ...
	I1101 09:50:25.596060   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt: {Name:mk4883ce1337edc02ddc3ac7b72fc885fc718a43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.596251   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key ...
	I1101 09:50:25.596263   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key: {Name:mk64aaf400461d117ff2d2f246459980ad32acba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.596345   74584 certs.go:257] generating profile certs ...
	I1101 09:50:25.596402   74584 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key
	I1101 09:50:25.596427   74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt with IP's: []
	I1101 09:50:25.837595   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt ...
	I1101 09:50:25.837629   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: {Name:mk6a3c2908e98c5011b9a353eff3f73fbb200e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.837800   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key ...
	I1101 09:50:25.837814   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.key: {Name:mke495d2d15563b5194e6cade83d0c75b9212db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.837890   74584 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c
	I1101 09:50:25.837920   74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.58]
	I1101 09:50:25.933112   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c ...
	I1101 09:50:25.933142   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c: {Name:mk0254e8775842aca5cd671155531f1ec86ec40f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.933311   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c ...
	I1101 09:50:25.933328   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c: {Name:mk3e1746ccfcc3989b4b0944f75fafe8929108a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:25.933413   74584 certs.go:382] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt.698c417c -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt
	I1101 09:50:25.933491   74584 certs.go:386] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key.698c417c -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key
	I1101 09:50:25.933552   74584 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key
	I1101 09:50:25.933569   74584 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt with IP's: []
	I1101 09:50:26.270478   74584 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt ...
	I1101 09:50:26.270513   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt: {Name:mk40ee0c5f510c6df044b64c5c0ccf02f754f518 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:26.270707   74584 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key ...
	I1101 09:50:26.270719   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key: {Name:mk13d4f8cab34676a9c94f4e51f06fa6b4450e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:26.270893   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:50:26.270934   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:50:26.270958   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:50:26.270980   74584 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 09:50:26.271524   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:50:26.304432   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:50:26.336585   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:50:26.370965   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:50:26.404637   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:50:26.438434   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:50:26.470419   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:50:26.505400   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:50:26.538739   74584 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:50:26.571139   74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:50:26.596933   74584 ssh_runner.go:195] Run: openssl version
	I1101 09:50:26.604814   74584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:50:26.625168   74584 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:50:26.631403   74584 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:50:26.631463   74584 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:50:26.639666   74584 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
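The `openssl x509 -hash -noout -in ...` run computes the subject-name hash that names the /etc/ssl/certs/<hash>.0 symlink (b5213941.0 above) so the CA is picked up by the system trust store. A sketch that shells out to openssl for the hash and then links the CA into the trust directory; it assumes openssl on PATH and write access to the trust dir, and is an illustration only:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert links certPath into trustDir under its OpenSSL subject hash,
// e.g. /etc/ssl/certs/b5213941.0, as the log does for minikubeCA.pem.
func trustCert(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(trustDir, hash+".0")
	os.Remove(link) // ignore error; we just want a fresh symlink
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```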
	I1101 09:50:26.655106   74584 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:50:26.660616   74584 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:50:26.660681   74584 kubeadm.go:401] StartCluster: {Name:addons-086339 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 C
lusterName:addons-086339 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:50:26.660767   74584 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:50:26.660830   74584 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:50:26.713279   74584 cri.go:89] found id: ""
	I1101 09:50:26.713354   74584 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:50:26.732360   74584 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:50:26.753939   74584 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:50:26.768399   74584 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:50:26.768428   74584 kubeadm.go:158] found existing configuration files:
	
	I1101 09:50:26.768509   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:50:26.780652   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:50:26.780726   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:50:26.792996   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:50:26.805190   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:50:26.805252   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:50:26.817970   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:50:26.829425   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:50:26.829521   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:50:26.842392   74584 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:50:26.855031   74584 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:50:26.855120   74584 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
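The repeated grep/rm pairs above implement the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so kubeadm regenerates it. A compact sketch of the same check done locally; the real code runs these commands over ssh_runner:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanStaleConfigs removes any of the given kubeconfig files that exist
// but do not reference the expected control-plane endpoint.
func cleanStaleConfigs(paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // missing file: nothing to clean, kubeadm will create it
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
			os.Remove(p)
		}
	}
}

func main() {
	cleanStaleConfigs([]string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```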
	I1101 09:50:26.868465   74584 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
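The bootstrap itself is a single `kubeadm init --config /var/tmp/minikube/kubeadm.yaml` run with the versioned binaries prepended to PATH and a fixed --ignore-preflight-errors list, wrapped in `sudo /bin/bash -c`. A sketch of assembling that invocation; only a subset of the ignore values from the Start: line above is shown, and this is an illustration rather than minikube's command builder:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeadmInitCmd builds the `sudo /bin/bash -c "env PATH=... kubeadm init ..."`
// invocation in the style of the Start: line above.
func kubeadmInitCmd(version, config string, ignored []string) *exec.Cmd {
	binDir := "/var/lib/minikube/binaries/" + version
	inner := fmt.Sprintf(
		"env PATH=%q kubeadm init --config %s --ignore-preflight-errors=%s",
		binDir+":$PATH", config, strings.Join(ignored, ","))
	return exec.Command("sudo", "/bin/bash", "-c", inner)
}

func main() {
	cmd := kubeadmInitCmd("v1.34.1", "/var/tmp/minikube/kubeadm.yaml", []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"Port-10250", "Swap", "NumCPU", "Mem",
	})
	fmt.Println(strings.Join(cmd.Args, " "))
}
```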
	I1101 09:50:27.034423   74584 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:50:40.596085   74584 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:50:40.596157   74584 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:50:40.596234   74584 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:50:40.596323   74584 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:50:40.596395   74584 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:50:40.596501   74584 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:50:40.598485   74584 out.go:252]   - Generating certificates and keys ...
	I1101 09:50:40.598596   74584 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:50:40.598677   74584 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:50:40.598786   74584 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:50:40.598884   74584 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:50:40.598965   74584 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:50:40.599020   74584 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:50:40.599097   74584 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:50:40.599235   74584 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-086339 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I1101 09:50:40.599294   74584 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:50:40.599486   74584 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-086339 localhost] and IPs [192.168.39.58 127.0.0.1 ::1]
	I1101 09:50:40.599578   74584 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:50:40.599671   74584 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:50:40.599744   74584 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:50:40.599837   74584 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:50:40.599908   74584 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:50:40.599990   74584 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:50:40.600070   74584 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:50:40.600159   74584 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:50:40.600236   74584 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:50:40.600342   74584 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:50:40.600430   74584 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:50:40.601841   74584 out.go:252]   - Booting up control plane ...
	I1101 09:50:40.601953   74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:50:40.602064   74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:50:40.602160   74584 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:50:40.602298   74584 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:50:40.602458   74584 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:50:40.602614   74584 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:50:40.602706   74584 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:50:40.602764   74584 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:50:40.602925   74584 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:50:40.603084   74584 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:50:40.603174   74584 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002004831s
	I1101 09:50:40.603300   74584 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:50:40.603404   74584 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.58:8443/livez
	I1101 09:50:40.603516   74584 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:50:40.603630   74584 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:50:40.603719   74584 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.147708519s
	I1101 09:50:40.603845   74584 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.505964182s
	I1101 09:50:40.603957   74584 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503174092s
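The [kubelet-check] and [control-plane-check] lines above describe kubeadm polling health endpoints until they respond, with a 4m0s ceiling. The sketch below shows one way to write such a poll loop against the kubelet's plain-HTTP healthz endpoint from the log; it is a generic illustration, not kubeadm's implementation, and the 1 s interval is an assumption.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the deadline passes,
// similar in spirit to kubeadm's kubelet-check / control-plane-check waits.
func waitHealthy(url string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	// The kubelet healthz endpoint from the log; 4m0s matches the stated upper bound.
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute, time.Second); err != nil {
		fmt.Println(err)
	}
}
```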
	I1101 09:50:40.604099   74584 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:50:40.604336   74584 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:50:40.604410   74584 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:50:40.604590   74584 kubeadm.go:319] [mark-control-plane] Marking the node addons-086339 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:50:40.604649   74584 kubeadm.go:319] [bootstrap-token] Using token: n6ooj1.g2r52lt9s64k7lzx
	I1101 09:50:40.606300   74584 out.go:252]   - Configuring RBAC rules ...
	I1101 09:50:40.606413   74584 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:50:40.606488   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:50:40.606682   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:50:40.606839   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:50:40.607006   74584 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:50:40.607114   74584 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:50:40.607229   74584 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:50:40.607269   74584 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:50:40.607307   74584 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:50:40.607312   74584 kubeadm.go:319] 
	I1101 09:50:40.607359   74584 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:50:40.607364   74584 kubeadm.go:319] 
	I1101 09:50:40.607423   74584 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:50:40.607428   74584 kubeadm.go:319] 
	I1101 09:50:40.607448   74584 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:50:40.607512   74584 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:50:40.607591   74584 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:50:40.607600   74584 kubeadm.go:319] 
	I1101 09:50:40.607669   74584 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:50:40.607677   74584 kubeadm.go:319] 
	I1101 09:50:40.607717   74584 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:50:40.607722   74584 kubeadm.go:319] 
	I1101 09:50:40.607785   74584 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:50:40.607880   74584 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:50:40.607975   74584 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:50:40.607984   74584 kubeadm.go:319] 
	I1101 09:50:40.608100   74584 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:50:40.608199   74584 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:50:40.608211   74584 kubeadm.go:319] 
	I1101 09:50:40.608275   74584 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token n6ooj1.g2r52lt9s64k7lzx \
	I1101 09:50:40.608412   74584 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a \
	I1101 09:50:40.608438   74584 kubeadm.go:319] 	--control-plane 
	I1101 09:50:40.608444   74584 kubeadm.go:319] 
	I1101 09:50:40.608584   74584 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:50:40.608595   74584 kubeadm.go:319] 
	I1101 09:50:40.608701   74584 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n6ooj1.g2r52lt9s64k7lzx \
	I1101 09:50:40.608845   74584 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a 
	I1101 09:50:40.608868   74584 cni.go:84] Creating CNI manager for ""
	I1101 09:50:40.608880   74584 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:50:40.610610   74584 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 09:50:40.612071   74584 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 09:50:40.627372   74584 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
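The two lines above create /etc/cni/net.d and write a 496-byte bridge conflist. The exact payload is not reproduced in the log, so the sketch below writes a generic bridge CNI configuration of the same shape; the subnet and plugin options are assumptions, not minikube's actual template.

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// A generic bridge CNI config; the real 1-k8s.conflist content is not shown in
// the log, so the subnet and plugin options here are illustrative only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
```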
	I1101 09:50:40.653117   74584 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:50:40.653226   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-086339 minikube.k8s.io/updated_at=2025_11_01T09_50_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=addons-086339 minikube.k8s.io/primary=true
	I1101 09:50:40.653234   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:40.841062   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:40.841065   74584 ops.go:34] apiserver oom_adj: -16
	I1101 09:50:41.341444   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:41.841738   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:42.341137   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:42.841859   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:43.341430   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:43.842032   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:44.341776   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:44.842146   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:45.342151   74584 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:50:45.471694   74584 kubeadm.go:1114] duration metric: took 4.818566134s to wait for elevateKubeSystemPrivileges
	I1101 09:50:45.471741   74584 kubeadm.go:403] duration metric: took 18.811065248s to StartCluster
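The repeated `kubectl get sa default` runs above are a wait loop for the default ServiceAccount, polled at roughly 500 ms intervals until it appears (about 4.8 s here, per the elevateKubeSystemPrivileges duration metric). A hedged sketch of that kind of wait, shelling out to kubectl rather than using minikube's internal runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs `kubectl get sa default` until it succeeds or the
// timeout expires, mirroring the ~500 ms cadence visible in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```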
	I1101 09:50:45.471765   74584 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:45.471940   74584 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 09:50:45.472382   74584 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:50:45.472671   74584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:50:45.472717   74584 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.58 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:50:45.472765   74584 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 09:50:45.472916   74584 addons.go:70] Setting yakd=true in profile "addons-086339"
	I1101 09:50:45.472917   74584 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-086339"
	I1101 09:50:45.472959   74584 addons.go:239] Setting addon yakd=true in "addons-086339"
	I1101 09:50:45.472963   74584 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-086339"
	I1101 09:50:45.472976   74584 addons.go:70] Setting registry=true in profile "addons-086339"
	I1101 09:50:45.472991   74584 addons.go:239] Setting addon registry=true in "addons-086339"
	I1101 09:50:45.473004   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473010   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473012   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473003   74584 addons.go:70] Setting metrics-server=true in profile "addons-086339"
	I1101 09:50:45.473051   74584 addons.go:70] Setting registry-creds=true in profile "addons-086339"
	I1101 09:50:45.473068   74584 addons.go:239] Setting addon metrics-server=true in "addons-086339"
	I1101 09:50:45.473084   74584 addons.go:239] Setting addon registry-creds=true in "addons-086339"
	I1101 09:50:45.473121   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473144   74584 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-086339"
	I1101 09:50:45.473150   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473175   74584 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-086339"
	I1101 09:50:45.473203   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473564   74584 addons.go:70] Setting volcano=true in profile "addons-086339"
	I1101 09:50:45.473589   74584 addons.go:239] Setting addon volcano=true in "addons-086339"
	I1101 09:50:45.473622   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.473737   74584 addons.go:70] Setting gcp-auth=true in profile "addons-086339"
	I1101 09:50:45.473786   74584 mustload.go:66] Loading cluster: addons-086339
	I1101 09:50:45.474010   74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:45.474219   74584 addons.go:70] Setting ingress-dns=true in profile "addons-086339"
	I1101 09:50:45.474254   74584 addons.go:239] Setting addon ingress-dns=true in "addons-086339"
	I1101 09:50:45.474313   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.472963   74584 config.go:182] Loaded profile config "addons-086339": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:50:45.473011   74584 addons.go:70] Setting storage-provisioner=true in profile "addons-086339"
	I1101 09:50:45.474667   74584 addons.go:239] Setting addon storage-provisioner=true in "addons-086339"
	I1101 09:50:45.474685   74584 addons.go:70] Setting cloud-spanner=true in profile "addons-086339"
	I1101 09:50:45.474699   74584 addons.go:239] Setting addon cloud-spanner=true in "addons-086339"
	I1101 09:50:45.474703   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474721   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474993   74584 addons.go:70] Setting volumesnapshots=true in profile "addons-086339"
	I1101 09:50:45.475011   74584 addons.go:239] Setting addon volumesnapshots=true in "addons-086339"
	I1101 09:50:45.475031   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.475344   74584 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-086339"
	I1101 09:50:45.475368   74584 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-086339"
	I1101 09:50:45.475372   74584 addons.go:70] Setting default-storageclass=true in profile "addons-086339"
	I1101 09:50:45.475392   74584 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-086339"
	I1101 09:50:45.475482   74584 addons.go:70] Setting ingress=true in profile "addons-086339"
	I1101 09:50:45.475497   74584 addons.go:239] Setting addon ingress=true in "addons-086339"
	I1101 09:50:45.475549   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474669   74584 addons.go:70] Setting inspektor-gadget=true in profile "addons-086339"
	I1101 09:50:45.475789   74584 addons.go:239] Setting addon inspektor-gadget=true in "addons-086339"
	I1101 09:50:45.475796   74584 out.go:179] * Verifying Kubernetes components...
	I1101 09:50:45.475819   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.474680   74584 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-086339"
	I1101 09:50:45.476065   74584 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-086339"
	I1101 09:50:45.476115   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.477255   74584 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:50:45.480031   74584 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:50:45.480031   74584 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:50:45.480033   74584 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	W1101 09:50:45.481113   74584 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 09:50:45.481446   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.484726   74584 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:50:45.484753   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:50:45.484938   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:50:45.484960   74584 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:50:45.484966   74584 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:50:45.484973   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:50:45.485125   74584 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:50:45.485153   74584 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:50:45.485273   74584 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-086339"
	I1101 09:50:45.485691   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.485920   74584 addons.go:239] Setting addon default-storageclass=true in "addons-086339"
	I1101 09:50:45.485962   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:45.487450   74584 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:50:45.487459   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:50:45.487484   74584 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:50:45.487497   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:50:45.487517   74584 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:50:45.487560   74584 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:50:45.487563   74584 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 09:50:45.488316   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:50:45.488329   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:50:45.488348   74584 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:50:45.489625   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:50:45.489651   74584 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:50:45.489699   74584 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:50:45.489902   74584 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:50:45.490208   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:50:45.490224   74584 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:50:45.490262   74584 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:50:45.490750   74584 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:50:45.491163   74584 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:50:45.491557   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:50:45.491173   74584 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 09:50:45.491207   74584 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:50:45.491713   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:50:45.491208   74584 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:50:45.491791   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:50:45.491917   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:50:45.492081   74584 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:50:45.492774   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:50:45.493050   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.493676   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.494048   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.494216   74584 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:50:45.494271   74584 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:50:45.494283   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:50:45.494189   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.494412   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.495222   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.495346   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.495450   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.495550   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:50:45.495608   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:50:45.495670   74584 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:50:45.495688   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:50:45.495797   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.495840   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.496406   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.496819   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.497603   74584 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:50:45.497622   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:50:45.498607   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:50:45.500140   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:50:45.500156   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.500745   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.500905   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.501448   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.501490   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.501945   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502137   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502129   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.502357   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.502386   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502479   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502618   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.502659   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:50:45.502626   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.502671   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.502621   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503336   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.503381   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503456   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.503481   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503494   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503740   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.503831   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.503858   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.503858   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.503886   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.504294   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.504670   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.504706   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.504708   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.504783   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.504812   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.504989   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.505241   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.505275   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:50:45.505416   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.505439   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.505646   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.505919   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.506301   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.506330   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.506479   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.506657   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.507207   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.507243   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.507456   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:45.507843   74584 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:50:45.509235   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:50:45.509251   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:50:45.511923   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.512313   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:45.512339   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:45.512478   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	W1101 09:50:45.863592   74584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56024->192.168.39.58:22: read: connection reset by peer
	I1101 09:50:45.863626   74584 retry.go:31] will retry after 353.468022ms: ssh: handshake failed: read tcp 192.168.39.1:56024->192.168.39.58:22: read: connection reset by peer
	W1101 09:50:45.863706   74584 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:56030->192.168.39.58:22: read: connection reset by peer
	I1101 09:50:45.863718   74584 retry.go:31] will retry after 366.435822ms: ssh: handshake failed: read tcp 192.168.39.1:56030->192.168.39.58:22: read: connection reset by peer
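The two handshake failures above are transient (connection reset during parallel addon setup) and are retried after roughly 350-370 ms. A generic dial-with-retry sketch in the same spirit (not minikube's retry package) could look like this:

```go
package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// retryDial retries a TCP dial a few times with a small randomized delay,
// in the spirit of the "will retry after ..." lines in the log above.
func retryDial(addr string, attempts int) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		// 300-400 ms, roughly matching the observed retry delays.
		time.Sleep(300*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond)
	}
	return nil, fmt.Errorf("all %d dial attempts failed: %w", attempts, lastErr)
}

func main() {
	conn, err := retryDial("192.168.39.58:22", 3)
	if err != nil {
		fmt.Println(err)
		return
	}
	conn.Close()
}
```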
	I1101 09:50:46.204700   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:50:46.344397   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:50:46.364416   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:50:46.364443   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:50:46.382914   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:50:46.401116   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:50:46.401152   74584 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:50:46.499674   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:50:46.525387   74584 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:50:46.525422   74584 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:50:46.528653   74584 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:46.528683   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:50:46.537039   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:50:46.585103   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:50:46.700077   74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:50:46.700117   74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:50:46.802990   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:50:46.845193   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:50:46.845228   74584 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:50:46.948887   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:47.114091   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:50:47.114126   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:50:47.173908   74584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.701178901s)
	I1101 09:50:47.173921   74584 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.696642998s)
	I1101 09:50:47.173999   74584 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:50:47.174095   74584 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:50:47.203736   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:50:47.203782   74584 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:50:47.327504   74584 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:50:47.327541   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:50:47.447307   74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:50:47.447333   74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:50:47.479289   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:50:47.516143   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:50:47.537776   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:50:47.537808   74584 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:50:47.602456   74584 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:50:47.602492   74584 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:50:47.634301   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:50:47.634334   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:50:47.666382   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:50:47.896414   74584 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:50:47.896454   74584 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:50:48.070881   74584 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:50:48.070918   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:50:48.088172   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:50:48.112581   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:50:48.112615   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:50:48.384804   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.180058223s)
	I1101 09:50:48.433222   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:50:48.433251   74584 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:50:48.570103   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:50:48.712201   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:50:48.712239   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:50:48.761409   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (2.41696863s)
	I1101 09:50:49.019503   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.636542693s)
	I1101 09:50:49.055833   74584 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:50:49.055864   74584 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:50:49.130302   74584 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:50:49.130330   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:50:49.321757   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:50:49.321783   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:50:49.571119   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:50:49.804708   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:50:49.804738   74584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:50:49.962509   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:50:49.962544   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:50:50.281087   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:50:50.281117   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:50:50.772055   74584 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:50:50.772080   74584 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:50:51.239409   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:50:52.962797   74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:50:52.966311   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:52.966764   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:52.966789   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:52.966934   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
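Many of the steps above are logged as `scp memory --> <path> (<n> bytes)`: an in-memory payload is written straight to a file on the VM over an SSH session like the one established here. A rough stand-in using the ssh CLI and `sudo tee` is sketched below; minikube uses its own SSH client rather than shelling out, and the remote file name in main is hypothetical.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// copyMemoryToRemote streams an in-memory payload to a file on the VM by piping
// it through ssh into `sudo tee`, a rough stand-in for the "scp memory --> ..."
// steps in the log above.
func copyMemoryToRemote(user, host, keyPath, remotePath string, payload []byte) error {
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		fmt.Sprintf("%s@%s", user, host),
		fmt.Sprintf("sudo tee %s > /dev/null", remotePath),
	)
	cmd.Stdin = bytes.NewReader(payload)
	return cmd.Run()
}

func main() {
	manifest := []byte("# example payload\n")
	err := copyMemoryToRemote("docker", "192.168.39.58",
		"/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa",
		"/etc/kubernetes/addons/example.yaml", manifest) // hypothetical target path
	if err != nil {
		fmt.Println("copy failed:", err)
	}
}
```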
	I1101 09:50:53.227038   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.727328057s)
	I1101 09:50:53.227151   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.69006708s)
	I1101 09:50:53.227189   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.642046598s)
	I1101 09:50:53.227242   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.424224705s)
	I1101 09:50:53.376728   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (6.427801852s)
	W1101 09:50:53.376771   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:50:53.376826   74584 retry.go:31] will retry after 359.696332ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
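This same validation error recurs on every retry of the gadget (ig) addon apply later in the log: client-side validation rejects a document in ig-crd.yaml that carries neither apiVersion nor kind. A minimal sketch of how the failure could be reproduced and what the validator expects, assuming the same manifest were available locally (the file path is an assumption):

    # Hypothetical local reproduction of the validation failure:
    kubectl apply --dry-run=client -f ig-crd.yaml
    # error: error validating "ig-crd.yaml": error validating data: [apiVersion not set, kind not set]

    # Every YAML document in the file needs both fields; for a CRD the header would read:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition

The --validate=false flag suggested in the error text would only skip the check rather than repair the manifest.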
	I1101 09:50:53.376871   74584 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.202843079s)
	I1101 09:50:53.376921   74584 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.202805311s)
	I1101 09:50:53.376950   74584 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
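The sed pipeline completed just above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway from inside the cluster. A hedged way to confirm the injected block, using the context name from this run:

    kubectl --context addons-086339 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # The edited Corefile should now contain, ahead of the forward directive:
    #   hosts {
    #      192.168.39.1 host.minikube.internal
    #      fallthrough
    #   }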
	I1101 09:50:53.377909   74584 node_ready.go:35] waiting up to 6m0s for node "addons-086339" to be "Ready" ...
	I1101 09:50:53.462748   74584 node_ready.go:49] node "addons-086339" is "Ready"
	I1101 09:50:53.462778   74584 node_ready.go:38] duration metric: took 84.807458ms for node "addons-086339" to be "Ready" ...
	I1101 09:50:53.462793   74584 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:50:53.462847   74584 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:50:53.534003   74584 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:50:53.650576   74584 addons.go:239] Setting addon gcp-auth=true in "addons-086339"
	I1101 09:50:53.650630   74584 host.go:66] Checking if "addons-086339" exists ...
	I1101 09:50:53.652687   74584 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:50:53.655511   74584 main.go:143] libmachine: domain addons-086339 has defined MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:53.655896   74584 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:a4:85", ip: ""} in network mk-addons-086339: {Iface:virbr1 ExpiryTime:2025-11-01 10:50:14 +0000 UTC Type:0 Mac:52:54:00:b9:a4:85 Iaid: IPaddr:192.168.39.58 Prefix:24 Hostname:addons-086339 Clientid:01:52:54:00:b9:a4:85}
	I1101 09:50:53.655920   74584 main.go:143] libmachine: domain addons-086339 has defined IP address 192.168.39.58 and MAC address 52:54:00:b9:a4:85 in network mk-addons-086339
	I1101 09:50:53.656060   74584 sshutil.go:53] new ssh client: &{IP:192.168.39.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/addons-086339/id_rsa Username:docker}
	I1101 09:50:53.737577   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:53.969325   74584 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-086339" context rescaled to 1 replicas
	I1101 09:50:55.148780   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.669443662s)
	I1101 09:50:55.148826   74584 addons.go:480] Verifying addon ingress=true in "addons-086339"
	I1101 09:50:55.148852   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.632675065s)
	I1101 09:50:55.148956   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.482535978s)
	I1101 09:50:55.149057   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.060852546s)
	I1101 09:50:55.149064   74584 addons.go:480] Verifying addon registry=true in "addons-086339"
	I1101 09:50:55.149094   74584 addons.go:480] Verifying addon metrics-server=true in "addons-086339"
	I1101 09:50:55.149162   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.579011593s)
	I1101 09:50:55.150934   74584 out.go:179] * Verifying ingress addon...
	I1101 09:50:55.150992   74584 out.go:179] * Verifying registry addon...
	I1101 09:50:55.151019   74584 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-086339 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:50:55.152636   74584 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:50:55.152833   74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:50:55.236576   74584 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:50:55.236603   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:55.236704   74584 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:50:55.236726   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
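The kapi.go waits that dominate the rest of this log poll each addon's labelled pods until they report Ready. Roughly equivalent manual checks, with the timeout values being assumptions:

    kubectl --context addons-086339 -n ingress-nginx wait pod \
      --selector=app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=5m
    kubectl --context addons-086339 -n kube-system wait pod \
      --selector=kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=5m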
	I1101 09:50:55.608860   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.037686923s)
	W1101 09:50:55.608910   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:50:55.608932   74584 retry.go:31] will retry after 233.800882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
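This failure looks like an ordering race rather than a broken manifest: the VolumeSnapshotClass is submitted in the same apply that creates the snapshot.storage.k8s.io CRDs, before those CRDs are established in API discovery, which is presumably why the forced re-apply a couple of seconds later completes without a further error. A sketch of a two-phase apply that avoids the race, using the file names from this run:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
                  -f snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml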
	I1101 09:50:55.697978   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:55.698030   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:55.843247   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:50:56.241749   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:56.241968   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:56.550655   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.311175816s)
	I1101 09:50:56.550716   74584 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-086339"
	I1101 09:50:56.550663   74584 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.087794232s)
	I1101 09:50:56.550810   74584 api_server.go:72] duration metric: took 11.078058308s to wait for apiserver process to appear ...
	I1101 09:50:56.550891   74584 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:50:56.550935   74584 api_server.go:253] Checking apiserver healthz at https://192.168.39.58:8443/healthz ...
	I1101 09:50:56.552309   74584 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:50:56.554454   74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:50:56.566874   74584 api_server.go:279] https://192.168.39.58:8443/healthz returned 200:
	ok
	I1101 09:50:56.569220   74584 api_server.go:141] control plane version: v1.34.1
	I1101 09:50:56.569247   74584 api_server.go:131] duration metric: took 18.347182ms to wait for apiserver health ...
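The healthz wait above probes the API server directly on the VM address. An assumed manual equivalent of that probe, with the endpoint taken from this run:

    curl -k https://192.168.39.58:8443/healthz
    # ok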
	I1101 09:50:56.569258   74584 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:50:56.586752   74584 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:50:56.586776   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:56.587214   74584 system_pods.go:59] 20 kube-system pods found
	I1101 09:50:56.587266   74584 system_pods.go:61] "amd-gpu-device-plugin-lr4lw" [bee1e3ae-5d43-4b43-a348-0e04ec066093] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:50:56.587277   74584 system_pods.go:61] "coredns-66bc5c9577-5v6h7" [ff58ca9c-6949-4ab8-b8ff-8be8e7b75757] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.587289   74584 system_pods.go:61] "coredns-66bc5c9577-vsbrs" [c3a65dae-82f4-4f33-b460-fa45a39b3342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.587297   74584 system_pods.go:61] "csi-hostpath-attacher-0" [50e03a30-f2e9-4ec1-ba85-6da2654030c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:50:56.587304   74584 system_pods.go:61] "csi-hostpath-resizer-0" [d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2] Pending
	I1101 09:50:56.587318   74584 system_pods.go:61] "csi-hostpathplugin-z7vjp" [96e87cd6-068d-40af-9966-b875b9a7629e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:50:56.587325   74584 system_pods.go:61] "etcd-addons-086339" [f17e5eab-51c0-409a-9bb3-3cb5e71200fd] Running
	I1101 09:50:56.587336   74584 system_pods.go:61] "kube-apiserver-addons-086339" [51b3d29f-af5e-441a-b3c0-754241fc92bc] Running
	I1101 09:50:56.587343   74584 system_pods.go:61] "kube-controller-manager-addons-086339" [62d54b81-f6bc-4bdc-bd22-c8a6fc39a043] Running
	I1101 09:50:56.587352   74584 system_pods.go:61] "kube-ingress-dns-minikube" [e328fd3e-a381-414d-ba99-1aa6f7f40585] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:50:56.587357   74584 system_pods.go:61] "kube-proxy-7fck9" [a834adcc-b0ec-4cad-8944-bea90a627787] Running
	I1101 09:50:56.587365   74584 system_pods.go:61] "kube-scheduler-addons-086339" [4db76834-5184-4a83-a228-35e83abc8c9d] Running
	I1101 09:50:56.587372   74584 system_pods.go:61] "metrics-server-85b7d694d7-6lx9r" [c4e44e90-7d77-43fc-913f-f26877e52760] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:50:56.587378   74584 system_pods.go:61] "nvidia-device-plugin-daemonset-jh2xq" [0a9234e2-8d6a-4110-86be-ff05f9be1a29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:50:56.587387   74584 system_pods.go:61] "registry-6b586f9694-8zvc5" [23d65f21-71d0-4da4-8f2f-5b59f93f9085] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:50:56.587395   74584 system_pods.go:61] "registry-creds-764b6fb674-ztjtq" [ae641ce9-b248-46a3-8e01-9d25e8d29825] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:50:56.587408   74584 system_pods.go:61] "registry-proxy-4p4n9" [73d260fc-8c68-439c-a460-208cdb29b271] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:50:56.587416   74584 system_pods.go:61] "snapshot-controller-7d9fbc56b8-4kwxj" [e301a0c5-17dc-43be-9fd5-c14b76c1b92c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.587429   74584 system_pods.go:61] "snapshot-controller-7d9fbc56b8-wzgp7" [4c770fa7-174c-43ab-ac63-635b19152843] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.587437   74584 system_pods.go:61] "storage-provisioner" [4c394064-33ff-4fd0-a4bc-afb948952ac6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:50:56.587448   74584 system_pods.go:74] duration metric: took 18.182475ms to wait for pod list to return data ...
	I1101 09:50:56.587460   74584 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:50:56.596967   74584 default_sa.go:45] found service account: "default"
	I1101 09:50:56.596990   74584 default_sa.go:55] duration metric: took 9.524828ms for default service account to be created ...
	I1101 09:50:56.596999   74584 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:50:56.613956   74584 system_pods.go:86] 20 kube-system pods found
	I1101 09:50:56.613988   74584 system_pods.go:89] "amd-gpu-device-plugin-lr4lw" [bee1e3ae-5d43-4b43-a348-0e04ec066093] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:50:56.613995   74584 system_pods.go:89] "coredns-66bc5c9577-5v6h7" [ff58ca9c-6949-4ab8-b8ff-8be8e7b75757] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.614003   74584 system_pods.go:89] "coredns-66bc5c9577-vsbrs" [c3a65dae-82f4-4f33-b460-fa45a39b3342] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:50:56.614009   74584 system_pods.go:89] "csi-hostpath-attacher-0" [50e03a30-f2e9-4ec1-ba85-6da2654030c9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 09:50:56.614014   74584 system_pods.go:89] "csi-hostpath-resizer-0" [d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 09:50:56.614020   74584 system_pods.go:89] "csi-hostpathplugin-z7vjp" [96e87cd6-068d-40af-9966-b875b9a7629e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 09:50:56.614023   74584 system_pods.go:89] "etcd-addons-086339" [f17e5eab-51c0-409a-9bb3-3cb5e71200fd] Running
	I1101 09:50:56.614028   74584 system_pods.go:89] "kube-apiserver-addons-086339" [51b3d29f-af5e-441a-b3c0-754241fc92bc] Running
	I1101 09:50:56.614033   74584 system_pods.go:89] "kube-controller-manager-addons-086339" [62d54b81-f6bc-4bdc-bd22-c8a6fc39a043] Running
	I1101 09:50:56.614040   74584 system_pods.go:89] "kube-ingress-dns-minikube" [e328fd3e-a381-414d-ba99-1aa6f7f40585] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:50:56.614045   74584 system_pods.go:89] "kube-proxy-7fck9" [a834adcc-b0ec-4cad-8944-bea90a627787] Running
	I1101 09:50:56.614051   74584 system_pods.go:89] "kube-scheduler-addons-086339" [4db76834-5184-4a83-a228-35e83abc8c9d] Running
	I1101 09:50:56.614058   74584 system_pods.go:89] "metrics-server-85b7d694d7-6lx9r" [c4e44e90-7d77-43fc-913f-f26877e52760] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:50:56.614073   74584 system_pods.go:89] "nvidia-device-plugin-daemonset-jh2xq" [0a9234e2-8d6a-4110-86be-ff05f9be1a29] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:50:56.614089   74584 system_pods.go:89] "registry-6b586f9694-8zvc5" [23d65f21-71d0-4da4-8f2f-5b59f93f9085] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:50:56.614095   74584 system_pods.go:89] "registry-creds-764b6fb674-ztjtq" [ae641ce9-b248-46a3-8e01-9d25e8d29825] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:50:56.614100   74584 system_pods.go:89] "registry-proxy-4p4n9" [73d260fc-8c68-439c-a460-208cdb29b271] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:50:56.614105   74584 system_pods.go:89] "snapshot-controller-7d9fbc56b8-4kwxj" [e301a0c5-17dc-43be-9fd5-c14b76c1b92c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.614114   74584 system_pods.go:89] "snapshot-controller-7d9fbc56b8-wzgp7" [4c770fa7-174c-43ab-ac63-635b19152843] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:50:56.614118   74584 system_pods.go:89] "storage-provisioner" [4c394064-33ff-4fd0-a4bc-afb948952ac6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:50:56.614126   74584 system_pods.go:126] duration metric: took 17.122448ms to wait for k8s-apps to be running ...
	I1101 09:50:56.614136   74584 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:50:56.614196   74584 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:50:56.662305   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:56.676451   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:57.009640   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.27202291s)
	W1101 09:50:57.009684   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:50:57.009709   74584 retry.go:31] will retry after 295.092784ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:50:57.009722   74584 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.357005393s)
	I1101 09:50:57.011440   74584 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:50:57.012826   74584 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:50:57.014068   74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:50:57.014084   74584 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:50:57.060410   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:57.092501   74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:50:57.092526   74584 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:50:57.163456   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:57.166739   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:57.235815   74584 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:50:57.235844   74584 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:50:57.305656   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:50:57.336319   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:50:57.561645   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:57.662574   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:57.663877   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:58.063249   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:58.157346   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:58.162591   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:58.566038   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:58.574812   74584 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.96059055s)
	I1101 09:50:58.574848   74584 system_svc.go:56] duration metric: took 1.960707525s WaitForService to wait for kubelet
	I1101 09:50:58.574856   74584 kubeadm.go:587] duration metric: took 13.102108035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:50:58.574874   74584 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:50:58.575108   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.73180936s)
	I1101 09:50:58.586405   74584 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 09:50:58.586436   74584 node_conditions.go:123] node cpu capacity is 2
	I1101 09:50:58.586457   74584 node_conditions.go:105] duration metric: took 11.577545ms to run NodePressure ...
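The NodePressure check reads the node's reported capacity, which is where the ephemeral-storage and CPU figures above come from. An assumed manual way to view the same numbers:

    kubectl --context addons-086339 describe node addons-086339 | grep -A6 'Capacity:'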
	I1101 09:50:58.586472   74584 start.go:242] waiting for startup goroutines ...
	I1101 09:50:58.664635   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:58.665016   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:59.063972   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:59.170042   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:50:59.176798   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:59.577259   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:50:59.664063   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:50:59.665180   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:00.063306   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:00.173864   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:00.174338   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.868634982s)
	W1101 09:51:00.174389   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:00.174423   74584 retry.go:31] will retry after 509.276592ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:00.174461   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.838092131s)
	I1101 09:51:00.175590   74584 addons.go:480] Verifying addon gcp-auth=true in "addons-086339"
	I1101 09:51:00.176082   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:00.177144   74584 out.go:179] * Verifying gcp-auth addon...
	I1101 09:51:00.179153   74584 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:51:00.185078   74584 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:51:00.185104   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:00.569905   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:00.666711   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:00.668288   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:00.684564   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:00.685802   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:01.058804   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:01.162413   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:01.162519   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:01.184967   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:01.561792   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:01.660578   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:01.660604   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:01.687510   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:02.048703   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.364096236s)
	W1101 09:51:02.048744   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:02.048770   74584 retry.go:31] will retry after 922.440306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:02.058033   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:02.156454   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:02.156517   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:02.184626   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:02.560632   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:02.663377   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:02.663392   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:02.682802   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:02.972204   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:03.066417   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:03.162498   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:03.164331   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:03.185238   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:03.558965   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:03.660685   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:03.662797   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:03.683857   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:03.988155   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.015906584s)
	W1101 09:51:03.988197   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:03.988221   74584 retry.go:31] will retry after 1.512024934s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:04.059661   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:04.158989   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:04.159171   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:04.184262   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:04.559848   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:04.665219   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:04.666152   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:04.684684   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:05.059373   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:05.157706   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:05.158120   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:05.184998   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:05.500748   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:05.560240   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:05.659023   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:05.660031   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:05.684729   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:06.059474   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:06.157196   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:06.157311   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:06.182088   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:51:06.269741   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:06.269786   74584 retry.go:31] will retry after 2.204116799s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:06.559209   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:06.657408   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:06.657492   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:06.683284   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:07.059744   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:07.160264   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:07.160549   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:07.183753   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:07.558791   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:07.658454   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:07.662675   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:07.684198   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:08.065874   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:08.160732   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:08.161495   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:08.182870   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:08.474158   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:08.564218   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:08.659007   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:08.661853   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:08.684365   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:09.062466   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:09.159228   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:09.159372   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:09.183927   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:09.561230   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:09.664415   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:09.666273   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:09.684865   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:09.700010   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.225813085s)
	W1101 09:51:09.700056   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:09.700081   74584 retry.go:31] will retry after 3.484047661s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:10.059617   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:10.156799   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:10.156883   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:10.183999   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:10.560483   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:10.661603   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:10.661780   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:10.686351   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:11.081718   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:11.188353   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:11.188507   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:11.188624   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:11.558634   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:11.660662   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:11.663221   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:11.683762   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:12.059387   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:12.156602   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:12.156961   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:12.183069   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:12.558360   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:12.657779   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:12.659195   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:12.684167   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:13.059425   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:13.159273   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:13.159720   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:13.182662   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:13.184729   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:13.558837   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:13.659127   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:13.659431   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:13.682290   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:51:14.013627   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:14.013674   74584 retry.go:31] will retry after 3.772853511s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:14.060473   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:14.168480   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:14.168525   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:14.195048   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:14.559885   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:14.655949   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:14.656674   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:14.682561   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:15.059773   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:15.158683   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:15.158997   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:15.185198   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:15.559183   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:15.657568   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:15.657667   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:15.683337   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:16.059611   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:16.156727   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:16.158488   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:16.182596   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:16.558923   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:16.656902   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:16.657753   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:16.683813   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:17.059799   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:17.157794   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:17.158058   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:17.183320   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:17.562511   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:17.661802   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:17.663610   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:17.683753   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:17.786898   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:18.062486   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:18.165903   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:18.166305   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:18.185036   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:18.563358   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:18.661780   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:18.664168   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:18.686501   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:19.062933   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:19.159993   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.373047606s)
	W1101 09:51:19.160054   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:19.160090   74584 retry.go:31] will retry after 8.062833615s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:19.160265   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:19.161792   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:19.187129   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:19.562165   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:19.662490   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:19.662887   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:19.685224   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:20.062452   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:20.158649   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:20.158963   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:20.185553   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:20.560324   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:20.663470   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:20.664773   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:20.687217   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:21.058336   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:21.158067   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:21.158764   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:21.184179   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:21.562709   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:21.660636   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:21.661331   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:21.683251   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:22.058468   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:22.158449   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:22.161441   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:22.183647   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:22.559209   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:22.657596   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:22.658067   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:22.684022   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:23.060587   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:23.159313   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:23.160492   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:23.183233   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:23.577231   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:23.658412   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:23.661233   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:23.684740   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:24.059042   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:24.157394   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:24.158911   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:24.182864   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:24.559933   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:24.657638   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:24.661214   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:24.686127   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:25.059953   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:25.158151   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:25.160939   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:25.183657   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:25.565339   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:25.663990   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:25.664201   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:25.683465   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:26.059376   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:26.158991   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:26.159088   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:26.184884   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:26.559386   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:26.657922   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:26.660583   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:26.683688   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:27.058939   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:27.156101   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:27.156998   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:27.182909   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:27.224025   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:27.562477   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:27.660651   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:27.662259   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:27.681905   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:28.059984   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:28.160493   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:28.162286   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:28.186135   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:51:28.200979   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:28.201029   74584 retry.go:31] will retry after 10.395817371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:28.558989   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:28.657430   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:28.660330   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:28.683885   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:29.061934   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:29.157765   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:29.157917   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:29.184278   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:29.560897   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:29.657774   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:29.657838   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:29.683106   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:30.059693   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:30.160732   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:51:30.166378   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:30.265635   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:30.558787   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:30.656060   74584 kapi.go:107] duration metric: took 35.503223323s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:51:30.656373   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:30.682215   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:31.059187   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:31.157561   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:31.258067   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:31.560106   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:31.657305   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:31.683226   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:32.059058   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:32.158395   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:32.182943   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:32.559674   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:32.660135   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:32.684028   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:33.059220   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:33.159029   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:33.189054   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:33.699380   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:33.699471   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:33.700370   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:34.059307   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:34.158409   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:34.189459   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:34.558736   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:34.656864   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:34.682855   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:35.058847   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:35.156770   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:35.182411   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:35.559605   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:35.657060   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:35.682886   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:36.059230   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:36.158265   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:36.185067   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:36.562462   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:36.657785   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:36.684734   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:37.059270   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:37.156638   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:37.184172   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:37.558438   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:37.656955   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:37.684255   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:38.061827   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:38.157365   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:38.182685   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:38.560831   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:38.597843   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:51:38.656804   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:38.686009   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:39.061543   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:39.158425   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:39.183760   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:39.559306   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:39.657197   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:39.684893   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:39.748441   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.150549422s)
	W1101 09:51:39.748504   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:51:39.748545   74584 retry.go:31] will retry after 20.354212059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
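Because the identical validation error recurs on every retry, the failure is deterministic in the file itself rather than in cluster state. Assuming the same paths seen in the log, it could be reproduced without mutating the cluster via a client-side dry run; this is a diagnostic sketch, not a command taken from the test run:

	# Re-parse only the CRD file client-side; the same "apiVersion not set,
	# kind not set" error should appear without touching the API server.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  --dry-run=client -o yaml \
	  -f /etc/kubernetes/addons/ig-crd.yaml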
	I1101 09:51:40.091278   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:40.159135   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:40.189976   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:40.561293   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:40.657506   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:40.682812   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:41.059036   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:41.157077   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:41.183024   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:41.560657   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:41.662059   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:41.686139   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:42.059712   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:42.158078   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:42.184717   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:42.558428   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:42.657474   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:42.682401   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:43.061067   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:43.159023   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:43.182945   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:43.559721   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:43.658905   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:43.683665   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:44.059768   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:44.156686   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:44.182520   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:44.558486   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:44.659410   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:44.686714   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:45.059691   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:45.161012   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:45.186846   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:45.566991   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:45.661771   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:45.683563   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:46.061274   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:46.157945   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:46.184842   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:46.559462   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:46.659702   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:46.682680   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:47.058242   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:47.159894   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:47.185416   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:47.561755   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:47.660011   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:47.683518   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:48.061815   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:48.158606   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:48.186741   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:48.562551   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:48.660513   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:48.683374   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:49.061955   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:49.158516   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:49.182835   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:49.558347   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:49.660756   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:49.685651   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:50.059457   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:50.161169   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:50.185382   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:50.560490   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:50.667931   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:50.691744   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:51.060229   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:51.163272   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:51.185468   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:51.561847   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:51.657559   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:51.684472   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:52.065897   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:52.165405   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:52.184183   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:52.558429   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:52.659763   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:52.687124   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:53.060334   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:53.159793   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:53.260599   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:53.836679   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:53.844731   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:53.846382   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:54.061169   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:54.160164   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:54.184130   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:54.559624   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:54.660771   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:54.683387   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:55.060182   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:55.158098   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:55.184607   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:55.568135   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:55.666901   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:55.688352   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:56.061312   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:56.160289   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:56.183561   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:56.559442   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:56.666114   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:56.686070   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:57.059598   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:57.157253   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:57.184083   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:57.559370   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:57.657282   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:57.684369   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:58.059645   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:58.160950   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:58.183605   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:58.559980   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:58.660720   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:58.682723   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:59.061658   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:59.161368   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:59.186554   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:51:59.562493   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:51:59.658000   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:51:59.686396   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:00.059261   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:00.103310   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:52:00.158774   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:00.183231   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:00.562324   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:00.659611   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:00.682795   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:01.061408   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:01.158866   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:01.188200   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:01.344727   74584 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.241365643s)
	W1101 09:52:01.344783   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:52:01.344810   74584 retry.go:31] will retry after 24.70836809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
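Each stderr block suggests --validate=false as an escape hatch. Folded into the exact command the installer keeps retrying, that would look like the sketch below; note that the flag only disables client-side validation and does not supply the missing apiVersion/kind fields, so it masks the problem rather than fixing the manifest:

	# Same command as in the log, with validation turned off as the error
	# message suggests; a workaround sketch, not a recommended fix.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml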
	I1101 09:52:01.558702   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:01.657288   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:01.683224   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:02.061177   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:02.158031   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:02.185134   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:02.559729   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:02.661884   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:02.684276   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:03.058102   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:03.159115   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:03.184840   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:03.559718   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:03.658993   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:03.682755   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:04.061600   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:04.157504   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:04.182206   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:04.558833   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:04.658122   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:04.690795   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:05.060282   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:05.159649   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:05.182512   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:05.558584   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:05.657372   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:05.682747   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:06.059347   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:06.156954   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:06.184088   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:06.559677   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:06.657737   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:06.683063   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:07.058922   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:07.156647   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:07.183210   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:07.559741   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:07.656366   74584 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:52:07.684732   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:08.060305   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:08.161326   74584 kapi.go:107] duration metric: took 1m13.008685899s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:52:08.184485   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:08.563527   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:08.684225   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:09.062454   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:09.183134   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:09.559703   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:09.683034   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:10.059517   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:10.183595   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:10.559051   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:10.684292   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:11.060725   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:11.184057   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:11.560407   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:11.684061   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:12.059623   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:12.338951   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:12.563238   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:12.687086   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:52:13.065805   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:13.186970   74584 kapi.go:107] duration metric: took 1m13.007813603s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:52:13.188654   74584 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-086339 cluster.
	I1101 09:52:13.190102   74584 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:52:13.191551   74584 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
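The gcp-auth notes above describe credentials being mounted into every pod by default, with a per-pod opt-out via a label key. A minimal sketch, assuming a hypothetical pod name and an arbitrary label value (only the gcp-auth-skip-secret key is significant, per the message above):

apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds                 # hypothetical name
  labels:
    gcp-auth-skip-secret: "true"     # presence of this key skips the credential mount
spec:
  containers:
    - name: app
      image: docker.io/nginx:alpine  # image reused from the test's own nginx manifest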
	I1101 09:52:13.561959   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:14.059590   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:14.558397   74584 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:52:15.059526   74584 kapi.go:107] duration metric: took 1m18.505070405s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 09:52:26.053439   74584 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:52:26.787218   74584 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:52:26.787354   74584 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1101 09:52:26.789142   74584 out.go:179] * Enabled addons: default-storageclass, registry-creds, amd-gpu-device-plugin, storage-provisioner, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1101 09:52:26.790527   74584 addons.go:515] duration metric: took 1m41.317758805s for enable addons: enabled=[default-storageclass registry-creds amd-gpu-device-plugin storage-provisioner nvidia-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1101 09:52:26.790585   74584 start.go:247] waiting for cluster config update ...
	I1101 09:52:26.790606   74584 start.go:256] writing updated cluster config ...
	I1101 09:52:26.790869   74584 ssh_runner.go:195] Run: rm -f paused
	I1101 09:52:26.797220   74584 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:52:26.802135   74584 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-vsbrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.807671   74584 pod_ready.go:94] pod "coredns-66bc5c9577-vsbrs" is "Ready"
	I1101 09:52:26.807696   74584 pod_ready.go:86] duration metric: took 5.533544ms for pod "coredns-66bc5c9577-vsbrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.809972   74584 pod_ready.go:83] waiting for pod "etcd-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.815396   74584 pod_ready.go:94] pod "etcd-addons-086339" is "Ready"
	I1101 09:52:26.815421   74584 pod_ready.go:86] duration metric: took 5.421578ms for pod "etcd-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.818352   74584 pod_ready.go:83] waiting for pod "kube-apiserver-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.823369   74584 pod_ready.go:94] pod "kube-apiserver-addons-086339" is "Ready"
	I1101 09:52:26.823403   74584 pod_ready.go:86] duration metric: took 5.02397ms for pod "kube-apiserver-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:26.825247   74584 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:27.201328   74584 pod_ready.go:94] pod "kube-controller-manager-addons-086339" is "Ready"
	I1101 09:52:27.201355   74584 pod_ready.go:86] duration metric: took 376.08311ms for pod "kube-controller-manager-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:27.402263   74584 pod_ready.go:83] waiting for pod "kube-proxy-7fck9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:27.802591   74584 pod_ready.go:94] pod "kube-proxy-7fck9" is "Ready"
	I1101 09:52:27.802625   74584 pod_ready.go:86] duration metric: took 400.328354ms for pod "kube-proxy-7fck9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:28.002425   74584 pod_ready.go:83] waiting for pod "kube-scheduler-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:28.401943   74584 pod_ready.go:94] pod "kube-scheduler-addons-086339" is "Ready"
	I1101 09:52:28.401969   74584 pod_ready.go:86] duration metric: took 399.516912ms for pod "kube-scheduler-addons-086339" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:52:28.401979   74584 pod_ready.go:40] duration metric: took 1.604730154s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:52:28.446357   74584 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:52:28.448281   74584 out.go:179] * Done! kubectl is now configured to use "addons-086339" cluster and "default" namespace by default
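The extra readiness wait above polls kube-system pods by the listed label selectors before declaring the cluster ready. Assuming the same kubeconfig context that minikube just configured, equivalent manual spot checks are plain kubectl label queries:

kubectl config current-context                               # expected: addons-086339
kubectl get pods -n kube-system -l k8s-app=kube-dns          # coredns pods
kubectl get pods -n kube-system -l component=kube-apiserver  # control-plane API server pod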
	
	
	==> CRI-O <==
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.479160539Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990956479136603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43785482-c5bd-4d23-8cda-609dd7edbcd6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.479794706Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f131a59-d9fd-4c4e-b0ad-78ffff96144f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.479938542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f131a59-d9fd-4c4e-b0ad-78ffff96144f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.480423739Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e24bc9ad0bcf2346a54dba46c112594d3456f8e0851e42ae540839ab98ade7,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761990734122207031,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28195b893a436d57c90ecac8b5fe73e5c1511f1415dc342be8113625a0b8d79a,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761990729102734319,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68f7cffce3b8170ddcfc4c830b594fbca731822c5a5c3c0fac39926a07fb45b5,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761990719918712512,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3e64d3efaef7385f69804094ceefebb9c929b31f339e7185c4ea6397ea2ccd,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761990718492685647,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ddab9d49bd9e1c4a3cbea6a7d518a89881a1bd73e967f274d22a36f1dfea5f,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761990716814923539,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ada47aa6071a7356f4a8da278d3fc6aca1557ba5c9a0099793d41309eba1008,PodSandboxId:943b7b78175f24034b62c4341008ce9ed69d78515991694c43fd13b4f6ac1fc1,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761990715391394480,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb32e4824fbd208045e8bad8dfedff1f23e927c32f1b62fae12233696df589bb,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761990713966475127,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSa
ndboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:441d236fd7e39d9b3a5cc6b3bb8961ce35e
c981238120611cdcc3cb61d7735b1,PodSandboxId:ed25ff910660155f553fa5ca020216deb811acfe62895c250c2f4116f4e42adf,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761990712051266696,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e03a30-f2e9-4ec1-ba85-6da2654030c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde07abd1
e0966a5ed194175cec37ddb2ab38d4771b5729d05571ed5072606a8,PodSandboxId:28091fc92ecf572bc5fbe283ff9cafe33ce0a67c0edcc4cbdfa901ef366642c5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990709762301697,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-4kwxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e301a0c5-17dc-43be-9fd5-c14b76c1b92c,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a29c1fc2c879d0e534a8cebc61baf83da09a1b3e98b2972f576c64cacf35d44,PodSandboxId:cd9903f7fe60f66bc1eee002e6b25f4b8358953184ed7bdceb69ca35d37af467,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990708178896364,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-wzgp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c770fa7-174c-43ab-ac63-635b19152843,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf58580295883b8e6038335be978078035e0481b1653d560046de613e93dbc8f,PodSandboxId:0451201d3cea63a5045de49d8239fa668d5e94aaf5ad3e940d5550071ebea6cb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761990692227112170,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-cffvx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.po
d.uid: bbe62e6a-a91a-428f-bbd4-b93bf597277d,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[
string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[st
ring]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:
kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:
&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09
c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efe
db8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f131a59-d9fd-4c4e-b0ad-78ffff96144f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.523167852Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1b93998a-7759-4bb9-8179-99f7439fbb99 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.523501923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1b93998a-7759-4bb9-8179-99f7439fbb99 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.525741042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e546d08-941f-4421-b3cc-854b3ea41c27 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.527386705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990956527359583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e546d08-941f-4421-b3cc-854b3ea41c27 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.528304530Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fc06500-8ea7-4852-a5f7-3d53b76806b9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.528422949Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fc06500-8ea7-4852-a5f7-3d53b76806b9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.529202337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e24bc9ad0bcf2346a54dba46c112594d3456f8e0851e42ae540839ab98ade7,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761990734122207031,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28195b893a436d57c90ecac8b5fe73e5c1511f1415dc342be8113625a0b8d79a,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761990729102734319,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68f7cffce3b8170ddcfc4c830b594fbca731822c5a5c3c0fac39926a07fb45b5,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761990719918712512,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3e64d3efaef7385f69804094ceefebb9c929b31f339e7185c4ea6397ea2ccd,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761990718492685647,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ddab9d49bd9e1c4a3cbea6a7d518a89881a1bd73e967f274d22a36f1dfea5f,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761990716814923539,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ada47aa6071a7356f4a8da278d3fc6aca1557ba5c9a0099793d41309eba1008,PodSandboxId:943b7b78175f24034b62c4341008ce9ed69d78515991694c43fd13b4f6ac1fc1,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761990715391394480,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb32e4824fbd208045e8bad8dfedff1f23e927c32f1b62fae12233696df589bb,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761990713966475127,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSa
ndboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:441d236fd7e39d9b3a5cc6b3bb8961ce35e
c981238120611cdcc3cb61d7735b1,PodSandboxId:ed25ff910660155f553fa5ca020216deb811acfe62895c250c2f4116f4e42adf,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761990712051266696,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e03a30-f2e9-4ec1-ba85-6da2654030c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde07abd1
e0966a5ed194175cec37ddb2ab38d4771b5729d05571ed5072606a8,PodSandboxId:28091fc92ecf572bc5fbe283ff9cafe33ce0a67c0edcc4cbdfa901ef366642c5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990709762301697,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-4kwxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e301a0c5-17dc-43be-9fd5-c14b76c1b92c,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a29c1fc2c879d0e534a8cebc61baf83da09a1b3e98b2972f576c64cacf35d44,PodSandboxId:cd9903f7fe60f66bc1eee002e6b25f4b8358953184ed7bdceb69ca35d37af467,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990708178896364,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-wzgp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c770fa7-174c-43ab-ac63-635b19152843,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf58580295883b8e6038335be978078035e0481b1653d560046de613e93dbc8f,PodSandboxId:0451201d3cea63a5045de49d8239fa668d5e94aaf5ad3e940d5550071ebea6cb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761990692227112170,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-cffvx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.po
d.uid: bbe62e6a-a91a-428f-bbd4-b93bf597277d,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[
string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[st
ring]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:
kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:
&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09
c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efe
db8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fc06500-8ea7-4852-a5f7-3d53b76806b9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.567957040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d35e9e7a-a704-4d8c-91ee-910697c6194a name=/runtime.v1.RuntimeService/Version
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.568237187Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d35e9e7a-a704-4d8c-91ee-910697c6194a name=/runtime.v1.RuntimeService/Version
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.569507667Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3a92ec17-e49b-44a0-8632-200024c0dc5f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.570683432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990956570662944,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3a92ec17-e49b-44a0-8632-200024c0dc5f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.571365443Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a4ce802e-8438-4e4e-957c-2a1fc122b6c2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.571441299Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a4ce802e-8438-4e4e-957c-2a1fc122b6c2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.572023033Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e24bc9ad0bcf2346a54dba46c112594d3456f8e0851e42ae540839ab98ade7,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761990734122207031,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28195b893a436d57c90ecac8b5fe73e5c1511f1415dc342be8113625a0b8d79a,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761990729102734319,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68f7cffce3b8170ddcfc4c830b594fbca731822c5a5c3c0fac39926a07fb45b5,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761990719918712512,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3e64d3efaef7385f69804094ceefebb9c929b31f339e7185c4ea6397ea2ccd,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761990718492685647,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ddab9d49bd9e1c4a3cbea6a7d518a89881a1bd73e967f274d22a36f1dfea5f,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761990716814923539,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ada47aa6071a7356f4a8da278d3fc6aca1557ba5c9a0099793d41309eba1008,PodSandboxId:943b7b78175f24034b62c4341008ce9ed69d78515991694c43fd13b4f6ac1fc1,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761990715391394480,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb32e4824fbd208045e8bad8dfedff1f23e927c32f1b62fae12233696df589bb,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761990713966475127,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSa
ndboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:441d236fd7e39d9b3a5cc6b3bb8961ce35e
c981238120611cdcc3cb61d7735b1,PodSandboxId:ed25ff910660155f553fa5ca020216deb811acfe62895c250c2f4116f4e42adf,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761990712051266696,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e03a30-f2e9-4ec1-ba85-6da2654030c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde07abd1
e0966a5ed194175cec37ddb2ab38d4771b5729d05571ed5072606a8,PodSandboxId:28091fc92ecf572bc5fbe283ff9cafe33ce0a67c0edcc4cbdfa901ef366642c5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990709762301697,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-4kwxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e301a0c5-17dc-43be-9fd5-c14b76c1b92c,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a29c1fc2c879d0e534a8cebc61baf83da09a1b3e98b2972f576c64cacf35d44,PodSandboxId:cd9903f7fe60f66bc1eee002e6b25f4b8358953184ed7bdceb69ca35d37af467,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990708178896364,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-wzgp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c770fa7-174c-43ab-ac63-635b19152843,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf58580295883b8e6038335be978078035e0481b1653d560046de613e93dbc8f,PodSandboxId:0451201d3cea63a5045de49d8239fa668d5e94aaf5ad3e940d5550071ebea6cb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761990692227112170,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-cffvx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.po
d.uid: bbe62e6a-a91a-428f-bbd4-b93bf597277d,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[
string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[st
ring]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:
kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:
&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09
c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efe
db8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a4ce802e-8438-4e4e-957c-2a1fc122b6c2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.609510740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8f0154b5-8792-4587-a483-f93b55b85bdb name=/runtime.v1.RuntimeService/Version
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.609777454Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8f0154b5-8792-4587-a483-f93b55b85bdb name=/runtime.v1.RuntimeService/Version
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.611161335Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=928d1cce-f13e-4300-99a5-c986926391a0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.612337561Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990956612313761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:511388,},InodesUsed:&UInt64Value{Value:186,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=928d1cce-f13e-4300-99a5-c986926391a0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.613009146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16c0de6e-d6ab-4ab6-97da-bc5b59b9166b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.613085623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16c0de6e-d6ab-4ab6-97da-bc5b59b9166b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:55:56 addons-086339 crio[826]: time="2025-11-01 09:55:56.613897440Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d8f9ab035f10b883f89c331d67218f109856992b9b069efdae0a16a908bf656d,PodSandboxId:ecbb6e0269dbe5206ee40e41cf202e8a0f1fc8985220bca67dd2abcee664753f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761990753121450389,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bd0f0b90-ebd1-434e-86db-7717f59bb0b2,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e24bc9ad0bcf2346a54dba46c112594d3456f8e0851e42ae540839ab98ade7,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1761990734122207031,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 9a80f5e9,io.kubernetes.container.restart
Count: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28195b893a436d57c90ecac8b5fe73e5c1511f1415dc342be8113625a0b8d79a,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1761990729102734319,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 743e
34f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60f64f1e1264248ca86d3e8ea17c90635c9d479311fe8d5ea622b661f0068bd6,PodSandboxId:b2e63f129e7cad5f03427260dc3589db4cecd4b45329bdb1e1023738a84b3985,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761990727410551183,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-g7dks,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: a4165ee4-5d09-49d4-a0c1-f663b2084a0d,},Annotations:map[string]
string{io.kubernetes.container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:68f7cffce3b8170ddcfc4c830b594fbca731822c5a5c3c0fac39926a07fb45b5,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1761990719918712512,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 62375f0d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be3e64d3efaef7385f69804094ceefebb9c929b31f339e7185c4ea6397ea2ccd,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpeci
fiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1761990718492685647,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 70cab6f4,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49ddab9d49bd9e1c4a3cbea6a7d518a89881a1bd73e967f274d22a36f1dfea5f,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-st
orage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1761990716814923539,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: 880c5a9e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ada47aa6071a7356f4a8da278d3fc6aca1557ba5c9a0099793d41309eba1008,PodSandboxId:943b7b78175f24034b62c4341008ce9ed69d78515991694c43fd13b4f6ac1fc1,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0
,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1761990715391394480,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2c565f0-80a3-4b2d-a99b-edc1d7ae4fe2,},Annotations:map[string]string{io.kubernetes.container.hash: 204ff79e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb32e4824fbd208045e8bad8dfedff1f23e927c32f1b62fae12233696df589bb,PodSandboxId:47bd89a6a83f44b6eb9380f5fc54d7fc011a810ca8589cc6ff65fdc4d34213d8,Metadata:&ContainerMetadata{Name
:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1761990713966475127,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-z7vjp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96e87cd6-068d-40af-9966-b875b9a7629e,},Annotations:map[string]string{io.kubernetes.container.hash: db43d78f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4b410307ca2339c52c3d12763e6b754600ea116c26c1df56bd5b04a1a68661d,PodSa
ndboxId:48e637e86e4493f303489a52457e2b59ba63b33cc608f38bb21f8e651a9e1571,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990712169152493,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dw6sn,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51ccb987-b8f5-42f1-af70-9d22dd5ca2ec,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:441d236fd7e39d9b3a5cc6b3bb8961ce35e
c981238120611cdcc3cb61d7735b1,PodSandboxId:ed25ff910660155f553fa5ca020216deb811acfe62895c250c2f4116f4e42adf,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1761990712051266696,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50e03a30-f2e9-4ec1-ba85-6da2654030c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d14b655,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde07abd1
e0966a5ed194175cec37ddb2ab38d4771b5729d05571ed5072606a8,PodSandboxId:28091fc92ecf572bc5fbe283ff9cafe33ce0a67c0edcc4cbdfa901ef366642c5,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990709762301697,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-4kwxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e301a0c5-17dc-43be-9fd5-c14b76c1b92c,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:764b375ef3791c087e6ad4b248126f0c7c98e6065f6bd3c282044dcc212ac1f4,PodSandboxId:1c83f726dda755d3ed283799c973eeabdf1da173f6f6ce420a3d047efb307a42,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761990709662174283,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-d7qkm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0111b6b5-409d-4b18-a391-db0a0fbe7882,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.c
ontainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a29c1fc2c879d0e534a8cebc61baf83da09a1b3e98b2972f576c64cacf35d44,PodSandboxId:cd9903f7fe60f66bc1eee002e6b25f4b8358953184ed7bdceb69ca35d37af467,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1761990708178896364,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-7d9fbc56b8-wzgp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c770fa7-174c-43ab-ac63-635b19152843,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ccff636c81da778123aaba73ca1c6a96114c3d9b455724fc184ea7051b61a16,PodSandboxId:ae1c1b106a1ce6fe7752079dd99dd3da08ea5c8417f73c7d2db66281343dd8bc,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761990706331554116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-p2brt,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 7d9684ff-4d35-4cab-b655-c3fcbbfaa552,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{
\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf58580295883b8e6038335be978078035e0481b1653d560046de613e93dbc8f,PodSandboxId:0451201d3cea63a5045de49d8239fa668d5e94aaf5ad3e940d5550071ebea6cb,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1761990692227112170,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-648f6765c9-cffvx,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.po
d.uid: bbe62e6a-a91a-428f-bbd4-b93bf597277d,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e5d4912957560b7536a6c330e413b78d8074dab0b202ba22a5bc327a0cf5f8a2,PodSandboxId:8aac4234df2d12e07c37fb39a1595bd340e7adc1fe2162b211b453851a56a63d,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761990685537208680,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.p
od.namespace: kube-system,io.kubernetes.pod.uid: e328fd3e-a381-414d-ba99-1aa6f7f40585,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:323c0222f1b7214ab99931e32355894f2f03f8261792abe4a4d2bb34fcd2969f,PodSandboxId:1c7e949564af5bc80420dc3808d3f2087aa2f9b293627ed59b78902667c1bcef,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761990655935157577,Labels:map[
string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lr4lw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee1e3ae-5d43-4b43-a348-0e04ec066093,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f,PodSandboxId:4fbf69bbad2cf19e93c7344344fcc06babe9936500aa5bef352fd41fd55b694f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761990655486179158,Labels:map[st
ring]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c394064-33ff-4fd0-a4bc-afb948952ac6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387,PodSandboxId:d7fa84c405309fb1e772e6c659810175defff8a22e42a89197e6b5a5597a8c84,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761990646997219064,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-vsbrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a65dae-82f4-4f33-b460-fa45a39b3342,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66,PodSandboxId:089a55380f09729b05eee5a252927b0c79db01dc718d6007a08b5689f2ce71c5,Metadata:&ContainerMetadata{Name:
kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761990646303679370,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7fck9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a834adcc-b0ec-4cad-8944-bea90a627787,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986,PodSandboxId:47c204cffec810f2b063e0da736cf9f9a808714639f57abfa3a16da3187f96a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:
&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761990633442334233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f64ac66b49c7412b8fa37d2ea6025670,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667,PodSandboxId:0780152663a4bf99a793fec09
c7dd2ddf6dc4673b89381ad0a9d3bb4248095e3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761990633398671407,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80f54e7a2ffeed9d816c83a1643dee4,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a6a05d5c3b322ab0daa8e0142efe
db8b2cd9709809a366e3b02c33252f097e2,PodSandboxId:4303a653e0e77a28ad08502f1313df5bfebd24a17c8e4816f84db5f2d930a571,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761990633395979421,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ff8e16ad24795a1ca532e7aa16809a1,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.
container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5,PodSandboxId:25028e524345d4f110f0887066fc1114742e907055b01a9fcf2cb85f6e770b0b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761990633414977195,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-086339,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 390b611a3c7c50f2133aad0ea70b2107,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.conta
iner.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16c0de6e-d6ab-4ab6-97da-bc5b59b9166b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	d8f9ab035f10b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          3 minutes ago       Running             busybox                                  0                   ecbb6e0269dbe       busybox
	54e24bc9ad0bc       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          3 minutes ago       Running             csi-snapshotter                          0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	28195b893a436       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	60f64f1e12642       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd                             3 minutes ago       Running             controller                               0                   b2e63f129e7ca       ingress-nginx-controller-675c5ddd98-g7dks
	68f7cffce3b81       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	be3e64d3efaef       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	49ddab9d49bd9       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago       Running             node-driver-registrar                    0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	9ada47aa6071a       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              4 minutes ago       Running             csi-resizer                              0                   943b7b78175f2       csi-hostpath-resizer-0
	bb32e4824fbd2       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   4 minutes ago       Running             csi-external-health-monitor-controller   0                   47bd89a6a83f4       csi-hostpathplugin-z7vjp
	a4b410307ca23       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   4 minutes ago       Exited              patch                                    0                   48e637e86e449       ingress-nginx-admission-patch-dw6sn
	441d236fd7e39       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             4 minutes ago       Running             csi-attacher                             0                   ed25ff9106601       csi-hostpath-attacher-0
	bde07abd1e096       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago       Running             volume-snapshot-controller               0                   28091fc92ecf5       snapshot-controller-7d9fbc56b8-4kwxj
	764b375ef3791       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39                   4 minutes ago       Exited              create                                   0                   1c83f726dda75       ingress-nginx-admission-create-d7qkm
	9a29c1fc2c879       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago       Running             volume-snapshot-controller               0                   cd9903f7fe60f       snapshot-controller-7d9fbc56b8-wzgp7
	6ccff636c81da       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            4 minutes ago       Running             gadget                                   0                   ae1c1b106a1ce       gadget-p2brt
	bf58580295883       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             4 minutes ago       Running             local-path-provisioner                   0                   0451201d3cea6       local-path-provisioner-648f6765c9-cffvx
	e5d4912957560       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               4 minutes ago       Running             minikube-ingress-dns                     0                   8aac4234df2d1       kube-ingress-dns-minikube
	323c0222f1b72       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     5 minutes ago       Running             amd-gpu-device-plugin                    0                   1c7e949564af5       amd-gpu-device-plugin-lr4lw
	6de230bb7ebf7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             5 minutes ago       Running             storage-provisioner                      0                   4fbf69bbad2cf       storage-provisioner
	a27cff89c3381       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             5 minutes ago       Running             coredns                                  0                   d7fa84c405309       coredns-66bc5c9577-vsbrs
	260edbddb00ef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             5 minutes ago       Running             kube-proxy                               0                   089a55380f097       kube-proxy-7fck9
	86586375e770d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             5 minutes ago       Running             kube-scheduler                           0                   47c204cffec81       kube-scheduler-addons-086339
	e1c9ad62c824f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             5 minutes ago       Running             kube-apiserver                           0                   25028e524345d       kube-apiserver-addons-086339
	195a44f107dbd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             5 minutes ago       Running             etcd                                     0                   0780152663a4b       etcd-addons-086339
	9a6a05d5c3b32       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             5 minutes ago       Running             kube-controller-manager                  0                   4303a653e0e77       kube-controller-manager-addons-086339
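
For reference, a listing equivalent to the container-status table above can normally be reproduced against the same profile by running crictl inside the minikube VM. The profile name is taken from this report; the exact invocation below is an assumption and was not executed by the test harness:

$ out/minikube-linux-amd64 ssh -p addons-086339 -- sudo crictl ps -a            # all containers, including exited ones
$ out/minikube-linux-amd64 ssh -p addons-086339 -- sudo crictl inspect <id>     # detailed JSON for one container id from the table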
	
	
	==> coredns [a27cff89c3381e7c393bcaa8835fcfc8181cd61e7b6ab7d332528c6747943387] <==
	[INFO] 10.244.0.8:46984 - 64533 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000141653s
	[INFO] 10.244.0.8:46984 - 26572 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000122796s
	[INFO] 10.244.0.8:46984 - 13929 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000122328s
	[INFO] 10.244.0.8:46984 - 50125 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000111517s
	[INFO] 10.244.0.8:46984 - 28460 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076823s
	[INFO] 10.244.0.8:46984 - 37293 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000357436s
	[INFO] 10.244.0.8:46984 - 35576 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000074841s
	[INFO] 10.244.0.8:47197 - 56588 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000121682s
	[INFO] 10.244.0.8:47197 - 56863 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074546s
	[INFO] 10.244.0.8:55042 - 52218 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00018264s
	[INFO] 10.244.0.8:55042 - 52511 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079606s
	[INFO] 10.244.0.8:46708 - 46443 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066375s
	[INFO] 10.244.0.8:46708 - 46765 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000066983s
	[INFO] 10.244.0.8:59900 - 32652 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000207279s
	[INFO] 10.244.0.8:59900 - 32872 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000078309s
	[INFO] 10.244.0.23:50316 - 52228 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001915683s
	[INFO] 10.244.0.23:47612 - 63606 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002354882s
	[INFO] 10.244.0.23:53727 - 34179 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000138277s
	[INFO] 10.244.0.23:43312 - 5456 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125706s
	[INFO] 10.244.0.23:34742 - 50233 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105505s
	[INFO] 10.244.0.23:42706 - 32458 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148964s
	[INFO] 10.244.0.23:47433 - 16041 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00404755s
	[INFO] 10.244.0.23:43796 - 36348 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.003930977s
	[INFO] 10.244.0.28:59610 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000657818s
	[INFO] 10.244.0.28:58478 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000385159s
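
The CoreDNS excerpt above can be pulled directly from the cluster with kubectl, using the context and pod name shown in this report; the command is illustrative, not harness output:

$ kubectl --context addons-086339 -n kube-system logs coredns-66bc5c9577-vsbrs --tail=100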
	
	
	==> describe nodes <==
	Name:               addons-086339
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-086339
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=addons-086339
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_50_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-086339
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-086339"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:50:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-086339
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:55:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:54:15 +0000   Sat, 01 Nov 2025 09:50:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.58
	  Hostname:    addons-086339
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0be334a213a4e9abad36168cb6c4d93
	  System UUID:                a0be334a-213a-4e9a-bad3-6168cb6c4d93
	  Boot ID:                    f5f61220-a436-4e42-9f0c-21fc51d403ab
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  gadget                      gadget-p2brt                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-g7dks    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         5m2s
	  kube-system                 amd-gpu-device-plugin-lr4lw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 coredns-66bc5c9577-vsbrs                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m11s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 csi-hostpathplugin-z7vjp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 etcd-addons-086339                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m16s
	  kube-system                 kube-apiserver-addons-086339                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-addons-086339        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-proxy-7fck9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-scheduler-addons-086339                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 snapshot-controller-7d9fbc56b8-4kwxj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-wzgp7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  local-path-storage          local-path-provisioner-648f6765c9-cffvx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m9s   kube-proxy       
	  Normal  Starting                 5m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m16s  kubelet          Node addons-086339 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s  kubelet          Node addons-086339 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s  kubelet          Node addons-086339 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m16s  kubelet          Node addons-086339 status is now: NodeReady
	  Normal  RegisteredNode           5m12s  node-controller  Node addons-086339 event: Registered Node addons-086339 in Controller
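
The node description above corresponds to a plain kubectl describe against the same context; the commands below follow the kubectl --context convention used elsewhere in this report and are illustrative rather than captured output:

$ kubectl --context addons-086339 describe node addons-086339
$ kubectl --context addons-086339 get pods -A -o wide      # cross-check the 21 non-terminated pods listed above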
	
	
	==> dmesg <==
	[  +0.136702] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.026933] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.422693] kauditd_printk_skb: 282 callbacks suppressed
	[  +0.000178] kauditd_printk_skb: 179 callbacks suppressed
	[Nov 1 09:51] kauditd_printk_skb: 480 callbacks suppressed
	[ +10.588247] kauditd_printk_skb: 85 callbacks suppressed
	[  +8.893680] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.164899] kauditd_printk_skb: 11 callbacks suppressed
	[ +11.079506] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.550370] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.067618] kauditd_printk_skb: 131 callbacks suppressed
	[  +2.164833] kauditd_printk_skb: 126 callbacks suppressed
	[Nov 1 09:52] kauditd_printk_skb: 130 callbacks suppressed
	[  +6.663248] kauditd_printk_skb: 68 callbacks suppressed
	[  +6.258025] kauditd_printk_skb: 26 callbacks suppressed
	[  +0.000041] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.077918] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.000038] kauditd_printk_skb: 22 callbacks suppressed
	[  +0.048376] kauditd_printk_skb: 98 callbacks suppressed
	[  +0.000043] kauditd_printk_skb: 78 callbacks suppressed
	[Nov 1 09:53] kauditd_printk_skb: 58 callbacks suppressed
	[  +4.089930] kauditd_printk_skb: 42 callbacks suppressed
	[ +31.556122] kauditd_printk_skb: 74 callbacks suppressed
	[Nov 1 09:54] kauditd_printk_skb: 80 callbacks suppressed
	[ +15.872282] kauditd_printk_skb: 22 callbacks suppressed
	
	
	==> etcd [195a44f107dbd39942c739e79c57d0cb5365ba4acc9d6617ae02ff7fabb66667] <==
	{"level":"warn","ts":"2025-11-01T09:51:53.828267Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"276.604194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:53.828291Z","caller":"traceutil/trace.go:172","msg":"trace[1920018772] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1053; }","duration":"276.642708ms","start":"2025-11-01T09:51:53.551641Z","end":"2025-11-01T09:51:53.828284Z","steps":["trace[1920018772] 'agreement among raft nodes before linearized reading'  (duration: 276.575926ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:51:53.828365Z","caller":"traceutil/trace.go:172","msg":"trace[1601158234] transaction","detail":"{read_only:false; response_revision:1054; number_of_response:1; }","duration":"307.834445ms","start":"2025-11-01T09:51:53.520519Z","end":"2025-11-01T09:51:53.828354Z","steps":["trace[1601158234] 'process raft request'  (duration: 307.722523ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:51:53.829077Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:51:53.520485Z","time spent":"307.914654ms","remote":"127.0.0.1:50442","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4224,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" mod_revision:715 > success:<request_put:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" value_size:4158 >> failure:<request_range:<key:\"/registry/jobs/ingress-nginx/ingress-nginx-admission-patch\" > >"}
	{"level":"warn","ts":"2025-11-01T09:51:53.837101Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.85086ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:53.837158Z","caller":"traceutil/trace.go:172","msg":"trace[1726047932] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1054; }","duration":"205.918617ms","start":"2025-11-01T09:51:53.631230Z","end":"2025-11-01T09:51:53.837149Z","steps":["trace[1726047932] 'agreement among raft nodes before linearized reading'  (duration: 205.832252ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:51:53.837332Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.114488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:53.837352Z","caller":"traceutil/trace.go:172","msg":"trace[1767754287] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1054; }","duration":"160.137708ms","start":"2025-11-01T09:51:53.677208Z","end":"2025-11-01T09:51:53.837346Z","steps":["trace[1767754287] 'agreement among raft nodes before linearized reading'  (duration: 160.097095ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:51:53.837427Z","caller":"traceutil/trace.go:172","msg":"trace[169582400] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"313.012286ms","start":"2025-11-01T09:51:53.524403Z","end":"2025-11-01T09:51:53.837415Z","steps":["trace[169582400] 'process raft request'  (duration: 312.936714ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:51:53.837521Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:51:53.524385Z","time spent":"313.094727ms","remote":"127.0.0.1:50348","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4615,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" mod_revision:1047 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" value_size:4543 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-dw6sn\" > >"}
	{"level":"warn","ts":"2025-11-01T09:51:53.837540Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.263588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:53.837560Z","caller":"traceutil/trace.go:172","msg":"trace[1222634] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1055; }","duration":"187.33ms","start":"2025-11-01T09:51:53.650224Z","end":"2025-11-01T09:51:53.837554Z","steps":["trace[1222634] 'agreement among raft nodes before linearized reading'  (duration: 187.245695ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:51:57.997674Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.945423ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:51:57.998286Z","caller":"traceutil/trace.go:172","msg":"trace[902941296] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1089; }","duration":"106.560193ms","start":"2025-11-01T09:51:57.891708Z","end":"2025-11-01T09:51:57.998268Z","steps":["trace[902941296] 'range keys from in-memory index tree'  (duration: 105.862666ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:52:04.319796Z","caller":"traceutil/trace.go:172","msg":"trace[427956117] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"140.175418ms","start":"2025-11-01T09:52:04.179583Z","end":"2025-11-01T09:52:04.319759Z","steps":["trace[427956117] 'process raft request'  (duration: 140.063245ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:52:08.551381Z","caller":"traceutil/trace.go:172","msg":"trace[603420838] transaction","detail":"{read_only:false; response_revision:1143; number_of_response:1; }","duration":"197.437726ms","start":"2025-11-01T09:52:08.353928Z","end":"2025-11-01T09:52:08.551366Z","steps":["trace[603420838] 'process raft request'  (duration: 197.339599ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:52:12.328289Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.65917ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:52:12.328359Z","caller":"traceutil/trace.go:172","msg":"trace[1819451364] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1156; }","duration":"151.738106ms","start":"2025-11-01T09:52:12.176611Z","end":"2025-11-01T09:52:12.328349Z","steps":["trace[1819451364] 'range keys from in-memory index tree'  (duration: 151.603213ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:52:19.593365Z","caller":"traceutil/trace.go:172","msg":"trace[1734006161] transaction","detail":"{read_only:false; response_revision:1195; number_of_response:1; }","duration":"230.197039ms","start":"2025-11-01T09:52:19.363155Z","end":"2025-11-01T09:52:19.593352Z","steps":["trace[1734006161] 'process raft request'  (duration: 230.054763ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:53:03.073159Z","caller":"traceutil/trace.go:172","msg":"trace[844100605] linearizableReadLoop","detail":"{readStateIndex:1471; appliedIndex:1471; }","duration":"184.287063ms","start":"2025-11-01T09:53:02.888805Z","end":"2025-11-01T09:53:03.073092Z","steps":["trace[844100605] 'read index received'  (duration: 184.274805ms)","trace[844100605] 'applied index is now lower than readState.Index'  (duration: 11.185µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:53:03.073336Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.514416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:53:03.073356Z","caller":"traceutil/trace.go:172","msg":"trace[379602539] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1424; }","duration":"184.548883ms","start":"2025-11-01T09:53:02.888802Z","end":"2025-11-01T09:53:03.073351Z","steps":["trace[379602539] 'agreement among raft nodes before linearized reading'  (duration: 184.47499ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:53:03.073440Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"173.732425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-11-01T09:53:03.073464Z","caller":"traceutil/trace.go:172","msg":"trace[1841159583] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1424; }","duration":"173.762443ms","start":"2025-11-01T09:53:02.899696Z","end":"2025-11-01T09:53:03.073458Z","steps":["trace[1841159583] 'agreement among raft nodes before linearized reading'  (duration: 173.676648ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:53:03.073212Z","caller":"traceutil/trace.go:172","msg":"trace[990398784] transaction","detail":"{read_only:false; response_revision:1424; number_of_response:1; }","duration":"298.156963ms","start":"2025-11-01T09:53:02.775044Z","end":"2025-11-01T09:53:03.073201Z","steps":["trace[990398784] 'process raft request'  (duration: 298.073448ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:55:56 up 5 min,  0 users,  load average: 0.64, 1.17, 0.65
	Linux addons-086339 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e1c9ad62c824f0689e272e9d02d572351e59f9325259ac81e62d0597c48762a5] <==
	I1101 09:50:55.476490       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:50:55.994199       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.109.112.13"}
	I1101 09:50:56.035477       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1101 09:50:56.324487       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.96.83.31"}
	W1101 09:50:57.099010       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:50:57.124752       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1101 09:50:58.973540       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.103.194.15"}
	W1101 09:51:14.142521       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:51:14.159779       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:51:14.218764       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:51:14.228684       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:51:45.524276       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 09:51:45.524485       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.150.255:443: connect: connection refused" logger="UnhandledError"
	E1101 09:51:45.525272       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 09:51:45.526596       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.150.255:443: connect: connection refused" logger="UnhandledError"
	E1101 09:51:45.531959       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.150.255:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.150.255:443: connect: connection refused" logger="UnhandledError"
	I1101 09:51:45.647009       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:52:39.519537       1 conn.go:339] Error on socket receive: read tcp 192.168.39.58:8443->192.168.39.1:41530: use of closed network connection
	I1101 09:52:48.989373       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.57.119"}
	I1101 09:53:11.180343       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 09:53:11.353371       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.7.153"}
	I1101 09:53:46.542354       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	
	
	==> kube-controller-manager [9a6a05d5c3b322ab0daa8e0142efedb8b2cd9709809a366e3b02c33252f097e2] <==
	I1101 09:50:44.141750       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:50:44.157085       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:50:44.158318       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:50:44.158339       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:50:44.158349       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:50:44.158354       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:50:44.159955       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:50:44.160156       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:50:44.160245       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:50:44.160320       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:50:44.160408       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:50:44.161689       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:50:44.167370       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	E1101 09:51:14.127299       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:51:14.127454       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 09:51:14.127514       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 09:51:14.185809       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:51:14.198574       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 09:51:14.229296       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:51:14.299732       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 09:51:44.237259       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 09:51:44.318152       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1101 09:52:52.843343       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gcp-auth"
	I1101 09:53:15.811359       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I1101 09:54:13.498480       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [260edbddb00ef801ed1131f918ebc64902a7e77ccf06a4ed7c432254423d7b66] <==
	I1101 09:50:47.380388       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:50:47.481009       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:50:47.481962       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.58"]
	E1101 09:50:47.483258       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:50:47.618974       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:50:47.619028       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:50:47.619055       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:50:47.646432       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:50:47.648118       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:50:47.648153       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:50:47.664129       1 config.go:309] "Starting node config controller"
	I1101 09:50:47.666955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:50:47.666969       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:50:47.665033       1 config.go:200] "Starting service config controller"
	I1101 09:50:47.666978       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:50:47.667949       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:50:47.667987       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:50:47.668010       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:50:47.668021       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:50:47.767136       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:50:47.771739       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:50:47.772010       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [86586375e770d45d0d93bbb47a93539a88b9dc7cdd8db120d1c9301cf9724986] <==
	E1101 09:50:37.221936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:50:37.222056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:50:37.222116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:50:37.222130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:50:37.225229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:50:37.225317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:50:37.225378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:50:37.227418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:50:37.227443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:50:37.227647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:50:37.227768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:50:37.227996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:50:38.054220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:50:38.064603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:50:38.082458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:50:38.180400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:50:38.210958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:50:38.220410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:50:38.222634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:50:38.324209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:50:38.347306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:50:38.391541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:50:38.445129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:50:38.559973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:50:41.263288       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:54:50 addons-086339 kubelet[1515]: E1101 09:54:50.268428    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990890268029990  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:54:50 addons-086339 kubelet[1515]: E1101 09:54:50.268463    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990890268029990  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:54:50 addons-086339 kubelet[1515]: E1101 09:54:50.305187    1515 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 09:54:50 addons-086339 kubelet[1515]: E1101 09:54:50.305229    1515 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 01 09:54:50 addons-086339 kubelet[1515]: E1101 09:54:50.305397    1515 kuberuntime_manager.go:1449] "Unhandled Error" err="container task-pv-container start failed in pod task-pv-pod_default(eb0ec6cf-d05a-4514-92a8-21a6ef18f433): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:54:50 addons-086339 kubelet[1515]: E1101 09:54:50.305429    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
	Nov 01 09:54:50 addons-086339 kubelet[1515]: E1101 09:54:50.561927    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="eb0ec6cf-d05a-4514-92a8-21a6ef18f433"
	Nov 01 09:55:00 addons-086339 kubelet[1515]: E1101 09:55:00.273508    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990900271552646  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:55:00 addons-086339 kubelet[1515]: E1101 09:55:00.273534    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990900271552646  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:55:10 addons-086339 kubelet[1515]: E1101 09:55:10.276942    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990910276315839  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:55:10 addons-086339 kubelet[1515]: E1101 09:55:10.276971    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990910276315839  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:55:14 addons-086339 kubelet[1515]: I1101 09:55:14.027120    1515 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:55:20 addons-086339 kubelet[1515]: E1101 09:55:20.279990    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990920279533624  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:55:20 addons-086339 kubelet[1515]: E1101 09:55:20.280034    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990920279533624  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:55:21 addons-086339 kubelet[1515]: E1101 09:55:21.714159    1515 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 01 09:55:21 addons-086339 kubelet[1515]: E1101 09:55:21.714211    1515 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="busybox:stable"
	Nov 01 09:55:21 addons-086339 kubelet[1515]: E1101 09:55:21.715191    1515 kuberuntime_manager.go:1449] "Unhandled Error" err="container busybox start failed in pod test-local-path_default(bb9a245d-f766-4ca6-8de9-96b056a9cab4): ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:55:21 addons-086339 kubelet[1515]: E1101 09:55:21.715236    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="bb9a245d-f766-4ca6-8de9-96b056a9cab4"
	Nov 01 09:55:30 addons-086339 kubelet[1515]: E1101 09:55:30.284869    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990930283903826  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:55:30 addons-086339 kubelet[1515]: E1101 09:55:30.285227    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990930283903826  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:55:37 addons-086339 kubelet[1515]: E1101 09:55:37.029091    1515 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"busybox:stable\\\": ErrImagePull: reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/test-local-path" podUID="bb9a245d-f766-4ca6-8de9-96b056a9cab4"
	Nov 01 09:55:40 addons-086339 kubelet[1515]: E1101 09:55:40.289878    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990940289416664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:55:40 addons-086339 kubelet[1515]: E1101 09:55:40.289922    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990940289416664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:55:50 addons-086339 kubelet[1515]: E1101 09:55:50.293194    1515 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990950292642282  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	Nov 01 09:55:50 addons-086339 kubelet[1515]: E1101 09:55:50.293241    1515 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990950292642282  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:511388}  inodes_used:{value:186}}"
	
	
	==> storage-provisioner [6de230bb7ebf7771aad2c97275ddf43f297877d5aa072670a0a1ea8eb9c2d60f] <==
	W1101 09:55:31.893328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:33.896270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:33.904616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:35.907948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:35.916412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:37.921172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:37.929237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:39.933121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:39.937747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:41.941572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:41.949290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:43.953081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:43.958542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:45.962384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:45.967990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:47.971729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:47.976929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:49.980949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:49.989538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:51.992645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:52.000154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:54.004213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:54.010812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:56.014462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:55:56.022778       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-086339 -n addons-086339
helpers_test.go:269: (dbg) Run:  kubectl --context addons-086339 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn: exit status 1 (90.343563ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-086339/192.168.39.58
	Start Time:       Sat, 01 Nov 2025 09:53:11 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.29
	IPs:
	  IP:  10.244.0.29
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sggwf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sggwf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m46s                default-scheduler  Successfully assigned default/nginx to addons-086339
	  Warning  Failed     99s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     99s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    98s                  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     98s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    86s (x2 over 2m46s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-086339/192.168.39.58
	Start Time:       Sat, 01 Nov 2025 09:53:15 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.30
	IPs:
	  IP:  10.244.0.30
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x27kl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-x27kl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m42s                default-scheduler  Successfully assigned default/task-pv-pod to addons-086339
	  Warning  Failed     67s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     67s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    67s                  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     67s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    56s (x2 over 2m41s)  kubelet            Pulling image "docker.io/nginx"
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-086339/192.168.39.58
	Start Time:       Sat, 01 Nov 2025 09:52:55 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         busybox:stable
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t5c9x (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-t5c9x:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m2s                 default-scheduler  Successfully assigned default/test-local-path to addons-086339
	  Warning  Failed     2m13s                kubelet            Failed to pull image "busybox:stable": fetching target platform image selected from image index: reading manifest sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737 in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     36s (x2 over 2m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     36s                  kubelet            Failed to pull image "busybox:stable": reading manifest stable in docker.io/library/busybox: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    20s (x2 over 2m13s)  kubelet            Back-off pulling image "busybox:stable"
	  Warning  Failed     20s (x2 over 2m13s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7s (x3 over 2m59s)   kubelet            Pulling image "busybox:stable"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-d7qkm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dw6sn" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-086339 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-d7qkm ingress-nginx-admission-patch-dw6sn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.938473054s)
--- FAIL: TestAddons/parallel/LocalPath (232.83s)
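
The ImagePullBackOff events above all cite Docker Hub's unauthenticated pull rate limit (toomanyrequests), which is what keeps nginx, task-pv-pod and test-local-path stuck in Pending. A minimal workaround sketch, assuming Docker Hub credentials are available and using a hypothetical secret name regcred, is to create a pull secret and attach it to the default service account (the one these test pods run under) so pulls are authenticated:

	kubectl --context addons-086339 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<access-token>
	kubectl --context addons-086339 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Alternatively, if the test images are already present in a local container runtime on the host, loading them directly into the cluster sidesteps the registry pull entirely, e.g. out/minikube-linux-amd64 -p addons-086339 image load docker.io/nginx:alpine (and likewise for docker.io/nginx and busybox:stable).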

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (302.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-950389 --alsologtostderr -v=1]
E1101 10:17:29.153777   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-950389 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-950389 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-950389 --alsologtostderr -v=1] stderr:
I1101 10:14:11.228743   82661 out.go:360] Setting OutFile to fd 1 ...
I1101 10:14:11.228983   82661 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:14:11.228992   82661 out.go:374] Setting ErrFile to fd 2...
I1101 10:14:11.228996   82661 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:14:11.229201   82661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
I1101 10:14:11.229440   82661 mustload.go:66] Loading cluster: functional-950389
I1101 10:14:11.229775   82661 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:14:11.231582   82661 host.go:66] Checking if "functional-950389" exists ...
I1101 10:14:11.231771   82661 api_server.go:166] Checking apiserver status ...
I1101 10:14:11.231827   82661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1101 10:14:11.234052   82661 main.go:143] libmachine: domain functional-950389 has defined MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:14:11.234383   82661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:b8:2f", ip: ""} in network mk-functional-950389: {Iface:virbr1 ExpiryTime:2025-11-01 11:03:40 +0000 UTC Type:0 Mac:52:54:00:b9:b8:2f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-950389 Clientid:01:52:54:00:b9:b8:2f}
I1101 10:14:11.234407   82661 main.go:143] libmachine: domain functional-950389 has defined IP address 192.168.39.40 and MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:14:11.234527   82661 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/functional-950389/id_rsa Username:docker}
I1101 10:14:11.332105   82661 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/14596/cgroup
W1101 10:14:11.344315   82661 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/14596/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1101 10:14:11.344376   82661 ssh_runner.go:195] Run: ls
I1101 10:14:11.350142   82661 api_server.go:253] Checking apiserver healthz at https://192.168.39.40:8441/healthz ...
I1101 10:14:11.354996   82661 api_server.go:279] https://192.168.39.40:8441/healthz returned 200:
ok
W1101 10:14:11.355050   82661 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1101 10:14:11.355195   82661 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:14:11.355205   82661 addons.go:70] Setting dashboard=true in profile "functional-950389"
I1101 10:14:11.355217   82661 addons.go:239] Setting addon dashboard=true in "functional-950389"
I1101 10:14:11.355238   82661 host.go:66] Checking if "functional-950389" exists ...
I1101 10:14:11.358718   82661 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1101 10:14:11.359952   82661 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I1101 10:14:11.361040   82661 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1101 10:14:11.361066   82661 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1101 10:14:11.363832   82661 main.go:143] libmachine: domain functional-950389 has defined MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:14:11.364324   82661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:b8:2f", ip: ""} in network mk-functional-950389: {Iface:virbr1 ExpiryTime:2025-11-01 11:03:40 +0000 UTC Type:0 Mac:52:54:00:b9:b8:2f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-950389 Clientid:01:52:54:00:b9:b8:2f}
I1101 10:14:11.364350   82661 main.go:143] libmachine: domain functional-950389 has defined IP address 192.168.39.40 and MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:14:11.364519   82661 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/functional-950389/id_rsa Username:docker}
I1101 10:14:11.465630   82661 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1101 10:14:11.465688   82661 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1101 10:14:11.489195   82661 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1101 10:14:11.489220   82661 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1101 10:14:11.511990   82661 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1101 10:14:11.512024   82661 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1101 10:14:11.533750   82661 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1101 10:14:11.533781   82661 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I1101 10:14:11.556198   82661 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
I1101 10:14:11.556230   82661 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1101 10:14:11.578871   82661 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1101 10:14:11.578905   82661 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1101 10:14:11.602411   82661 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1101 10:14:11.602441   82661 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1101 10:14:11.631247   82661 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1101 10:14:11.631279   82661 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1101 10:14:11.654140   82661 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1101 10:14:11.654168   82661 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1101 10:14:11.676248   82661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1101 10:14:12.391134   82661 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-950389 addons enable metrics-server

                                                
                                                
I1101 10:14:12.392902   82661 addons.go:202] Writing out "functional-950389" config to set dashboard=true...
W1101 10:14:12.393225   82661 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
	I1101 10:14:12.394134   82661 kapi.go:59] client config for functional-950389: &rest.Config{Host:"https://192.168.39.40:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.key", CAFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1101 10:14:12.394756   82661 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1101 10:14:12.394778   82661 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1101 10:14:12.394785   82661 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1101 10:14:12.394790   82661 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1101 10:14:12.394796   82661 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1101 10:14:12.415188   82661 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  da754266-1729-40b2-b45f-7b6d820c9be3 810 0 2025-11-01 10:14:12 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-11-01 10:14:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.97.234.201,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.97.234.201],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W1101 10:14:12.415361   82661 out.go:285] * Launching proxy ...
* Launching proxy ...
I1101 10:14:12.415444   82661 dashboard.go:154] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-950389 proxy --port 36195]
I1101 10:14:12.415938   82661 dashboard.go:159] Waiting for kubectl to output host:port ...
I1101 10:14:12.466664   82661 dashboard.go:177] proxy stdout: Starting to serve on 127.0.0.1:36195
W1101 10:14:12.466701   82661 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I1101 10:14:12.475471   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5d645b1b-0f89-4775-a46b-1434bbc98f27] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc0016326c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292a00 TLS:<nil>}
I1101 10:14:12.475573   82661 retry.go:31] will retry after 99.262µs: Temporary Error: unexpected response code: 503
I1101 10:14:12.479512   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5959926c-029b-4925-b908-cb65e6d95c07] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc00159d9c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000455040 TLS:<nil>}
I1101 10:14:12.479583   82661 retry.go:31] will retry after 216.661µs: Temporary Error: unexpected response code: 503
I1101 10:14:12.483162   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[30b3a07b-cffe-4911-b5b6-9c115c858efc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc001632800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ca8c0 TLS:<nil>}
I1101 10:14:12.483211   82661 retry.go:31] will retry after 136.511µs: Temporary Error: unexpected response code: 503
I1101 10:14:12.486979   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f7fe6f3c-cf3a-46ae-a13f-67a55a9c8a2a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc00159db00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000455400 TLS:<nil>}
I1101 10:14:12.487038   82661 retry.go:31] will retry after 288.305µs: Temporary Error: unexpected response code: 503
I1101 10:14:12.494080   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9b098206-99ec-428e-8ef3-549825e13499] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc0016328c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cadc0 TLS:<nil>}
I1101 10:14:12.494143   82661 retry.go:31] will retry after 572.395µs: Temporary Error: unexpected response code: 503
I1101 10:14:12.499137   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b9a9a32d-f5ee-4b60-a305-6fb5b95ea9e1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc00151cb40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000455540 TLS:<nil>}
I1101 10:14:12.499192   82661 retry.go:31] will retry after 686.406µs: Temporary Error: unexpected response code: 503
I1101 10:14:12.503601   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[91fa7631-8e6d-4bfb-ada3-c872f9c52156] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc0016329c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292c80 TLS:<nil>}
I1101 10:14:12.503646   82661 retry.go:31] will retry after 1.053778ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.507679   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf4401ee-9b85-42ee-9af1-7a74f353ffec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc001632a80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004557c0 TLS:<nil>}
I1101 10:14:12.507723   82661 retry.go:31] will retry after 2.156112ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.514191   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cbac56a8-142b-4652-b412-d1dc6e48c94e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc00151cc40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000455900 TLS:<nil>}
I1101 10:14:12.514239   82661 retry.go:31] will retry after 3.111501ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.520275   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bde8d7ce-fa40-4fb3-b725-45bd54e827d4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc001632b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292dc0 TLS:<nil>}
I1101 10:14:12.520331   82661 retry.go:31] will retry after 2.10796ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.525999   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ae384380-b5fc-411e-9cf5-6efc80560aef] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc00159dc80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000455a40 TLS:<nil>}
I1101 10:14:12.526068   82661 retry.go:31] will retry after 5.385293ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.534636   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[437dc813-a5cf-4419-afa2-55f257ad57d0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc001632c00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001caf00 TLS:<nil>}
I1101 10:14:12.534678   82661 retry.go:31] will retry after 12.955766ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.550920   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[31525391-ccda-471b-b319-79b076fb690d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc00159dd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000455b80 TLS:<nil>}
I1101 10:14:12.550985   82661 retry.go:31] will retry after 10.339213ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.566205   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea12f050-2d0c-4cd9-9b79-725242bfb56e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc00151cd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb040 TLS:<nil>}
I1101 10:14:12.566267   82661 retry.go:31] will retry after 17.788661ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.587276   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e6cab5d7-8935-45ab-a6af-13c6087d6c02] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc001632d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292f00 TLS:<nil>}
I1101 10:14:12.587361   82661 retry.go:31] will retry after 37.987158ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.631311   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c1337dc-3dbc-45d4-887e-7394b50ffd2b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc001632dc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000455cc0 TLS:<nil>}
I1101 10:14:12.631386   82661 retry.go:31] will retry after 62.26439ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.700993   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6d73f960-4fd5-42a0-87f0-0dced56d5111] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc00151ce40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000455e00 TLS:<nil>}
I1101 10:14:12.701140   82661 retry.go:31] will retry after 90.636574ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.796714   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4164d19f-0ca1-41ff-affe-d9386ce201e6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc001632f00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293040 TLS:<nil>}
I1101 10:14:12.796799   82661 retry.go:31] will retry after 110.782988ms: Temporary Error: unexpected response code: 503
I1101 10:14:12.920341   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1103a0b8-c427-432f-bdd2-33c2635723f4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:12 GMT]] Body:0xc00151cf00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000f8000 TLS:<nil>}
I1101 10:14:12.920411   82661 retry.go:31] will retry after 87.096893ms: Temporary Error: unexpected response code: 503
I1101 10:14:13.012300   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[72befc91-241c-4c91-9a10-640880acf4d2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:13 GMT]] Body:0xc00159df00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293180 TLS:<nil>}
I1101 10:14:13.012367   82661 retry.go:31] will retry after 112.127468ms: Temporary Error: unexpected response code: 503
I1101 10:14:13.128064   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ea6b95b8-a1ca-47fc-86e6-040f914e1dba] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:13 GMT]] Body:0xc00151d000 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb180 TLS:<nil>}
I1101 10:14:13.128129   82661 retry.go:31] will retry after 230.297044ms: Temporary Error: unexpected response code: 503
I1101 10:14:13.362640   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ddddec38-dd59-4458-954e-4e61a81a312a] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:13 GMT]] Body:0xc001633040 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002932c0 TLS:<nil>}
I1101 10:14:13.362709   82661 retry.go:31] will retry after 294.098934ms: Temporary Error: unexpected response code: 503
I1101 10:14:13.660724   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a638b9d-61fb-43bd-9d43-357df1670bbc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:13 GMT]] Body:0xc001826080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000f83c0 TLS:<nil>}
I1101 10:14:13.660797   82661 retry.go:31] will retry after 961.622059ms: Temporary Error: unexpected response code: 503
I1101 10:14:14.627093   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a83553e8-4896-476d-9f7f-875e21dadb83] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:14 GMT]] Body:0xc00151d0c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb400 TLS:<nil>}
I1101 10:14:14.627186   82661 retry.go:31] will retry after 702.600593ms: Temporary Error: unexpected response code: 503
I1101 10:14:15.333457   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0d1697c7-ced1-4bff-bb23-d8f87a162111] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:15 GMT]] Body:0xc001826140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293400 TLS:<nil>}
I1101 10:14:15.333556   82661 retry.go:31] will retry after 2.257911517s: Temporary Error: unexpected response code: 503
I1101 10:14:17.596690   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[df73b978-5a7c-4ed7-87f1-2e92f67e62f1] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:17 GMT]] Body:0xc00151d200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293540 TLS:<nil>}
I1101 10:14:17.596753   82661 retry.go:31] will retry after 3.309998785s: Temporary Error: unexpected response code: 503
I1101 10:14:20.913813   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9b66ff22-cb04-4355-8327-3d049b4c9148] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:20 GMT]] Body:0xc00151d2c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293680 TLS:<nil>}
I1101 10:14:20.913888   82661 retry.go:31] will retry after 3.717497102s: Temporary Error: unexpected response code: 503
I1101 10:14:24.636349   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[97ef064f-1268-4159-927a-54672ff59b3d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:24 GMT]] Body:0xc001826200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000f8500 TLS:<nil>}
I1101 10:14:24.636412   82661 retry.go:31] will retry after 8.159267326s: Temporary Error: unexpected response code: 503
I1101 10:14:32.799814   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ff8d3f7f-8097-4ce8-a4f1-29779a0c0122] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:32 GMT]] Body:0xc001826280 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002937c0 TLS:<nil>}
I1101 10:14:32.799876   82661 retry.go:31] will retry after 4.76496994s: Temporary Error: unexpected response code: 503
I1101 10:14:37.568923   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[98f1e78d-e582-4e0a-945c-b14853b22a41] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:37 GMT]] Body:0xc001633240 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb540 TLS:<nil>}
I1101 10:14:37.568994   82661 retry.go:31] will retry after 17.728727363s: Temporary Error: unexpected response code: 503
I1101 10:14:55.304202   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1bda5abc-690d-4dfd-8bbd-c2a431ba1faa] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:14:55 GMT]] Body:0xc0016332c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000293900 TLS:<nil>}
I1101 10:14:55.304263   82661 retry.go:31] will retry after 20.393506154s: Temporary Error: unexpected response code: 503
I1101 10:15:15.703722   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f755d7d2-95f0-46c1-a830-c972a7085359] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:15:15 GMT]] Body:0xc00151d440 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb680 TLS:<nil>}
I1101 10:15:15.703809   82661 retry.go:31] will retry after 39.189795681s: Temporary Error: unexpected response code: 503
I1101 10:15:54.897951   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4227f66a-84f8-49e8-9441-fa7593f81e13] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:15:54 GMT]] Body:0xc00151d4c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001cb7c0 TLS:<nil>}
I1101 10:15:54.898020   82661 retry.go:31] will retry after 51.595816584s: Temporary Error: unexpected response code: 503
I1101 10:16:46.498295   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2b0b781e-f9bc-49d1-86bb-14f1b0665663] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:16:46 GMT]] Body:0xc001632040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292000 TLS:<nil>}
I1101 10:16:46.498371   82661 retry.go:31] will retry after 1m2.640111549s: Temporary Error: unexpected response code: 503
I1101 10:17:49.144276   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b65b53a7-7a2b-489e-9749-399f1272edc9] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:17:49 GMT]] Body:0xc00151c100 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0000f8640 TLS:<nil>}
I1101 10:17:49.144370   82661 retry.go:31] will retry after 1m9.246921016s: Temporary Error: unexpected response code: 503
I1101 10:18:58.395921   82661 dashboard.go:216] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ef517a88-388b-4262-9635-5ee8c820b4d0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Sat, 01 Nov 2025 10:18:58 GMT]] Body:0xc0018260c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000292140 TLS:<nil>}
I1101 10:18:58.396029   82661 retry.go:31] will retry after 1m11.622181945s: Temporary Error: unexpected response code: 503
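
The dashboard probe above keeps receiving 503 from the apiserver's service proxy (presumably because the dashboard pod never became ready) and backs off with roughly exponentially growing, jittered delays, from microseconds up to over a minute, until the test gives up. A minimal Go sketch of this probe-and-backoff pattern is shown below; it is illustrative only, not minikube's actual retry.go/dashboard.go code, and the cap, budget and starting delay are assumptions (the URL is the one in the log).

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeUntilReady polls url until it returns 200 OK or maxWait elapses,
// doubling the delay between attempts much like the retries logged above.
func probeUntilReady(url string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 100 * time.Microsecond // the log starts with microsecond-scale waits
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // service is answering
			}
			fmt.Printf("unexpected response code: %d, will retry after %v\n", resp.StatusCode, delay)
		}
		time.Sleep(delay)
		delay *= 2 // exponential growth; the real retry also adds jitter
		if delay > time.Minute {
			delay = time.Minute
		}
	}
	return fmt.Errorf("%s still not ready after %v", url, maxWait)
}

func main() {
	// Proxied dashboard URL from the log above (the port is test-specific).
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	if err := probeUntilReady(url, 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}
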
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-950389 -n functional-950389
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 logs -n 25: (1.699410062s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdspecific-port3461040035/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh            │ functional-950389 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh            │ functional-950389 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh            │ functional-950389 ssh -- ls -la /mount-9p                                                                                         │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh            │ functional-950389 ssh sudo umount -f /mount-9p                                                                                    │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ mount          │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount1 --alsologtostderr -v=1                 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ mount          │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount2 --alsologtostderr -v=1                 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ mount          │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount3 --alsologtostderr -v=1                 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh            │ functional-950389 ssh findmnt -T /mount1                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh            │ functional-950389 ssh findmnt -T /mount1                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh            │ functional-950389 ssh findmnt -T /mount2                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh            │ functional-950389 ssh findmnt -T /mount3                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ mount          │ -p functional-950389 --kill=true                                                                                                  │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-950389 --alsologtostderr -v=1                                                                    │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ start          │ -p functional-950389 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                           │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ update-context │ functional-950389 update-context --alsologtostderr -v=2                                                                           │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ update-context │ functional-950389 update-context --alsologtostderr -v=2                                                                           │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ update-context │ functional-950389 update-context --alsologtostderr -v=2                                                                           │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ image          │ functional-950389 image ls --format short --alsologtostderr                                                                       │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ image          │ functional-950389 image ls --format yaml --alsologtostderr                                                                        │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ ssh            │ functional-950389 ssh pgrep buildkitd                                                                                             │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ image          │ functional-950389 image build -t localhost/my-image:functional-950389 testdata/build --alsologtostderr                            │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ image          │ functional-950389 image ls                                                                                                        │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ image          │ functional-950389 image ls --format json --alsologtostderr                                                                        │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ image          │ functional-950389 image ls --format table --alsologtostderr                                                                       │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:18:11
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:18:11.338627   83952 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:18:11.338735   83952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:18:11.338746   83952 out.go:374] Setting ErrFile to fd 2...
	I1101 10:18:11.338753   83952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:18:11.339054   83952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 10:18:11.339498   83952 out.go:368] Setting JSON to false
	I1101 10:18:11.340456   83952 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7239,"bootTime":1761985052,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:18:11.340559   83952 start.go:143] virtualization: kvm guest
	I1101 10:18:11.342408   83952 out.go:179] * [functional-950389] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:18:11.343782   83952 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:18:11.343812   83952 notify.go:221] Checking for updates...
	I1101 10:18:11.346339   83952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:18:11.347789   83952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 10:18:11.348964   83952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 10:18:11.350180   83952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:18:11.351488   83952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:18:11.353388   83952 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:18:11.354047   83952 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:18:11.385249   83952 out.go:179] * Using the kvm2 driver based on the existing profile
	I1101 10:18:11.386557   83952 start.go:309] selected driver: kvm2
	I1101 10:18:11.386576   83952 start.go:930] validating driver "kvm2" against &{Name:functional-950389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-950389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:18:11.386679   83952 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:18:11.388574   83952 out.go:203] 
	W1101 10:18:11.389749   83952 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 10:18:11.390833   83952 out.go:203] 
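
The dry-run start above is rejected during driver validation because the requested 250MB is below the 1800MB usable minimum reported by minikube. A minimal sketch of that kind of pre-flight memory check follows; the constant and error code are taken from the message above, while the function and file names are assumed for illustration only.

package main

import "fmt"

// minUsableMemoryMB mirrors the 1800MB minimum reported in the log above.
const minUsableMemoryMB = 1800

// validateRequestedMemory is a hypothetical stand-in for minikube's driver
// validation step; it rejects allocations below the usable minimum.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	// --memory 250MB, as passed to the dry-run start in the audit table above.
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to", err)
	}
}
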
	
	
	==> CRI-O <==
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.101596369Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9cb48085-2663-4f18-84f5-e5169dda2fca name=/runtime.v1.RuntimeService/Version
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.103210386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5b5c4e7-1fe4-429c-96a6-c2ecbf50b78b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.104190481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992352104165717,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5b5c4e7-1fe4-429c-96a6-c2ecbf50b78b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.105494584Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba6ab4db-68a1-4c64-acbd-24da77e6e55a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.105595782Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba6ab4db-68a1-4c64-acbd-24da77e6e55a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.106219067Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba6ab4db-68a1-4c64-acbd-24da77e6e55a name=/runtime.v1.RuntimeService/ListContainers
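
The journal entries above show CRI-O serving standard CRI (Container Runtime Interface) gRPC calls: RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers with an empty filter. A minimal client sketch using k8s.io/cri-api is shown below; it assumes CRI-O's default socket at /var/run/crio/crio.sock and is illustrative only, not the code kubelet or crictl actually runs.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default CRI endpoint; adjust the socket path for other runtimes.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same call that produced the VersionResponse entry in the journal above.
	ver, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

	// An empty filter returns the full container list, as CRI-O notes above.
	list, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
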
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.133037549Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=04a48016-a5ba-4f41-bcf7-35df8ebdffbe name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.133515659Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2206d0a0ca9a11a75117230cff6b51f399d76981fd11314cdd1348d0c31a35c4,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-77bf4d6c4c-ljjhn,Uid:4eed9960-2472-42b7-bcd3-4ef596df7b49,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761992052668615633,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-ljjhn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4eed9960-2472-42b7-bcd3-4ef596df7b49,k8s-app: dashboard-metrics-scraper,pod-template-hash: 77bf4d6c4c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:14:12.342953312Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:9f9e977cc4c5f75e55fcba1ec55d08a8a992f8fac7fd21a99f6
a7a820448b973,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-wm424,Uid:6078d1e3-19c1-4501-9499-752f92e11376,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761992052608000501,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-wm424,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6078d1e3-19c1-4501-9499-752f92e11376,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:14:12.283497153Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:60978975-8366-41aa-b97a-93a1c86afe6c,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991944824425359,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,
io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:12:24.500677330Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3e0030d38b65031d2143668ce214339e12fae726517239a313a4e6dc86ea1bc,Metadata:&PodSandboxMetadata{Name:hello-node-75c85bcc94-wws2s,Uid:d32fd7e0-500b-4734-88ed-9a2fdbad7f04,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761991936293594494,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-75c85bcc94-wws2s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d32fd7e0-500b-4734-88ed-9a2fdbad7f04,pod-template-hash: 75c85bcc94,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:12:15.971381795Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:97d84c97619508a942b50015d715d4c91d3a72452e3b25ac696ed985311b40ba,Metada
ta:&PodSandboxMetadata{Name:sp-pod,Uid:9629694c-f849-48d0-8099-8989879acb4b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761991932734540590,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9629694c-f849-48d0-8099-8989879acb4b,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"docker.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-11-01T10:12:08.612711445Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b6dbf761f3c666c59882dd0a3b39b7cf0a2fa8
e99c3659fbfe9f86f997d537b5,Metadata:&PodSandboxMetadata{Name:hello-node-connect-7d85dfc575-t7gtf,Uid:e6aa5eba-2bc1-4f18-9d27-1e0bc284884d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761991922437978400,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-t7gtf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6aa5eba-2bc1-4f18-9d27-1e0bc284884d,pod-template-hash: 7d85dfc575,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:12:02.051312145Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&PodSandboxMetadata{Name:mysql-5bb876957f-nnckx,Uid:dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761991920248185293,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.
namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,pod-template-hash: 5bb876957f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:11:59.918429191Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-6ps9z,Uid:a502e626-8a66-4687-9b76-053029dabdd6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761991895141922043,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:11:34.660051334Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&PodSandboxMetadata
{Name:kube-proxy-jtt6l,Uid:a0c48c32-fe99-40bf-b651-b04105adec6b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991895012845129,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:11:34.660101452Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:26487b7cf6878efb0c7f1e22a323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:437a47af-7662-481d-b1b7-09379f4069c9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991895008437551,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T10:11:34.660106144Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6
eff162,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-5lfgh,Uid:7c9758ea-cd15-49e2-893c-e78ed7d30f55,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761991895007610768,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:11:34.660107415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4761b5b9cfad0377a16669089b1cecc4946201e1cea0b53be0485d7c076615b9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-950389,Uid:832ac4e926fa9d3ad2ccc452d513f863,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991891381843249,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 832ac4e926fa9d3ad2ccc452d513f863,kubernetes.io/config.seen: 2025-11-01T10:11:30.664667748Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-950389,Uid:5ef73b8d782106f4ce68a921abfa7e79,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991891360194547,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5ef73b8d782106f4ce68a921abfa7e79,kubernetes.io/config.seen: 2025-11-01T10:11:30.664666909Z,kubernetes.io/config.s
ource: file,},RuntimeHandler:,},&PodSandbox{Id:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&PodSandboxMetadata{Name:etcd-functional-950389,Uid:cfd96429d7f7575fe65285b5903ca594,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991891359655352,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.40:2379,kubernetes.io/config.hash: cfd96429d7f7575fe65285b5903ca594,kubernetes.io/config.seen: 2025-11-01T10:11:30.664662286Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-950389,Uid:9b43be368500d83ae97f6110abbf40e1,Namespace:kube-system,Attempt:
0,},State:SANDBOX_READY,CreatedAt:1761991891357669917,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.40:8441,kubernetes.io/config.hash: 9b43be368500d83ae97f6110abbf40e1,kubernetes.io/config.seen: 2025-11-01T10:11:30.664665782Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:437a47af-7662-481d-b1b7-09379f4069c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991853770631626,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.nam
e: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T10:10:53.450871709Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d9b89241d322c09b20ad49ff28f27f
36d53287ba1b6d7a58950a03a0850382b5,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-5lfgh,Uid:7c9758ea-cd15-49e2-893c-e78ed7d30f55,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991852586184309,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:10:52.194020887Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-6ps9z,Uid:a502e626-8a66-4687-9b76-053029dabdd6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991852476793471,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:10:52.143661391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&PodSandboxMetadata{Name:kube-proxy-jtt6l,Uid:a0c48c32-fe99-40bf-b651-b04105adec6b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991852171101167,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:10:51.834586044Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b
4b50f66,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-950389,Uid:5ef73b8d782106f4ce68a921abfa7e79,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991840534595372,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5ef73b8d782106f4ce68a921abfa7e79,kubernetes.io/config.seen: 2025-11-01T10:10:40.044391953Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&PodSandboxMetadata{Name:etcd-functional-950389,Uid:cfd96429d7f7575fe65285b5903ca594,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991840517701569,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD
,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.40:2379,kubernetes.io/config.hash: cfd96429d7f7575fe65285b5903ca594,kubernetes.io/config.seen: 2025-11-01T10:10:40.044389457Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-950389,Uid:832ac4e926fa9d3ad2ccc452d513f863,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991840482777196,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,tier: control-plane,},Annotations:map[string]string{kubernetes.io/co
nfig.hash: 832ac4e926fa9d3ad2ccc452d513f863,kubernetes.io/config.seen: 2025-11-01T10:10:40.044386409Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=04a48016-a5ba-4f41-bcf7-35df8ebdffbe name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.135577855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f94a42d-634d-4795-b32e-dcdd404267f3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.135638210Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f94a42d-634d-4795-b32e-dcdd404267f3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.136243887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f94a42d-634d-4795-b32e-dcdd404267f3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.152016532Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f0793386-e9ad-42d0-9f24-0863c59f52a1 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.152136667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f0793386-e9ad-42d0-9f24-0863c59f52a1 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.153378570Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=593eaf32-4559-4ef7-b0e1-f1bddcc6ae01 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.155608974Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992352155584777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=593eaf32-4559-4ef7-b0e1-f1bddcc6ae01 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.156557518Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=245123c2-ffa8-4c83-bdbe-295452a62ccd name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.156887900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=245123c2-ffa8-4c83-bdbe-295452a62ccd name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.158135222Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=245123c2-ffa8-4c83-bdbe-295452a62ccd name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.196635447Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02ac9ccd-9a49-437a-92e8-b9b02dd63f06 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.196711790Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02ac9ccd-9a49-437a-92e8-b9b02dd63f06 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.198415573Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a478409e-4110-412a-bd13-cb139234b538 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.199226789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992352199039558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a478409e-4110-412a-bd13-cb139234b538 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.200209946Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b43818ab-4ae9-480e-9b16-774c79b32d95 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.200281700Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b43818ab-4ae9-480e-9b16-774c79b32d95 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:19:12 functional-950389 crio[14025]: time="2025-11-01 10:19:12.200629870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b43818ab-4ae9-480e-9b16-774c79b32d95 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c2e6bc4021502       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   cbcb2c689c2b5       busybox-mount
	a5adf9564ca63       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb       6 minutes ago       Running             mysql                     0                   6b974e48e60d2       mysql-5bb876957f-nnckx
	774e7ac99a154       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Running             coredns                   1                   c72bccebda6e8       coredns-66bc5c9577-6ps9z
	9c5bd32acfec7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Running             coredns                   1                   1756d604cc85a       coredns-66bc5c9577-5lfgh
	0bea04e4d0bfb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Running             kube-proxy                1                   f63e3783b0088       kube-proxy-jtt6l
	273ca0fdb8a91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       1                   26487b7cf6878       storage-provisioner
	58eb28533db82       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Running             etcd                      4                   88cad2727994a       etcd-functional-950389
	186f819959a13       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      7 minutes ago       Running             kube-scheduler            4                   4761b5b9cfad0       kube-scheduler-functional-950389
	b5396a2c9c588       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      7 minutes ago       Running             kube-apiserver            0                   8a83bc2cd1d4b       kube-apiserver-functional-950389
	e90b96d4e9728       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      7 minutes ago       Running             kube-controller-manager   8                   139a554d695f0       kube-controller-manager-functional-950389
	a2f46ea37f76e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Exited              storage-provisioner       0                   68f5ead72a631       storage-provisioner
	8d93163c73425       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      8 minutes ago       Exited              coredns                   0                   d9b89241d322c       coredns-66bc5c9577-5lfgh
	9f54db03325aa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      8 minutes ago       Exited              coredns                   0                   6554bb887a4e4       coredns-66bc5c9577-6ps9z
	62537ebd8ec8c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      8 minutes ago       Exited              kube-proxy                0                   17b51286c59a8       kube-proxy-jtt6l
	f49d0ac915f87       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      8 minutes ago       Exited              etcd                      3                   2c85f617f556b       etcd-functional-950389
	35509da8a528e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      8 minutes ago       Exited              kube-controller-manager   7                   1e98d8cc22231       kube-controller-manager-functional-950389
	39af2ac349d69       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      8 minutes ago       Exited              kube-scheduler            3                   a312f42dd95c1       kube-scheduler-functional-950389
	
	
	==> coredns [774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> coredns [8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> coredns [9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-950389
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-950389
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=functional-950389
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_10_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:10:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-950389
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:19:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:18:33 +0000   Sat, 01 Nov 2025 10:10:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:18:33 +0000   Sat, 01 Nov 2025 10:10:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:18:33 +0000   Sat, 01 Nov 2025 10:10:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:18:33 +0000   Sat, 01 Nov 2025 10:10:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    functional-950389
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 50219d7eb2434b54bfeb5a3ddfefd678
	  System UUID:                50219d7e-b243-4b54-bfeb-5a3ddfefd678
	  Boot ID:                    f9fafb52-9d25-4c51-b234-2193020a6a0b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-wws2s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m57s
	  default                     hello-node-connect-7d85dfc575-t7gtf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  default                     mysql-5bb876957f-nnckx                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    7m13s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 coredns-66bc5c9577-5lfgh                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m20s
	  kube-system                 coredns-66bc5c9577-6ps9z                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m20s
	  kube-system                 etcd-functional-950389                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         8m25s
	  kube-system                 kube-apiserver-functional-950389              250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-controller-manager-functional-950389     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-jtt6l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-scheduler-functional-950389              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-ljjhn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wm424         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m19s                  kube-proxy       
	  Normal  Starting                 7m36s                  kube-proxy       
	  Normal  Starting                 8m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m32s (x2 over 8m32s)  kubelet          Node functional-950389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m32s (x2 over 8m32s)  kubelet          Node functional-950389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m32s (x2 over 8m32s)  kubelet          Node functional-950389 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m25s                  kubelet          Node functional-950389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m25s                  kubelet          Node functional-950389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m25s                  kubelet          Node functional-950389 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m22s                  node-controller  Node functional-950389 event: Registered Node functional-950389 in Controller
	  Normal  Starting                 7m42s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m41s (x8 over 7m42s)  kubelet          Node functional-950389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m41s (x8 over 7m42s)  kubelet          Node functional-950389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m41s (x7 over 7m42s)  kubelet          Node functional-950389 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m35s                  node-controller  Node functional-950389 event: Registered Node functional-950389 in Controller
	
	
	==> dmesg <==
	[  +0.000074] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.833497] kauditd_printk_skb: 249 callbacks suppressed
	[ +20.643469] kauditd_printk_skb: 38 callbacks suppressed
	[ +13.048600] kauditd_printk_skb: 11 callbacks suppressed
	[Nov 1 10:06] kauditd_printk_skb: 263 callbacks suppressed
	[ +13.559184] kauditd_printk_skb: 154 callbacks suppressed
	[Nov 1 10:07] kauditd_printk_skb: 5 callbacks suppressed
	[ +11.574466] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 10:08] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 10:10] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.100652] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.156569] kauditd_printk_skb: 132 callbacks suppressed
	[  +0.226013] kauditd_printk_skb: 12 callbacks suppressed
	[Nov 1 10:11] kauditd_printk_skb: 170 callbacks suppressed
	[  +0.112112] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.976961] kauditd_printk_skb: 232 callbacks suppressed
	[  +4.355116] kauditd_printk_skb: 154 callbacks suppressed
	[ +18.317316] kauditd_printk_skb: 167 callbacks suppressed
	[Nov 1 10:12] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000026] kauditd_printk_skb: 80 callbacks suppressed
	[  +0.000138] kauditd_printk_skb: 95 callbacks suppressed
	[  +6.081767] kauditd_printk_skb: 26 callbacks suppressed
	[Nov 1 10:14] kauditd_printk_skb: 29 callbacks suppressed
	[Nov 1 10:15] kauditd_printk_skb: 68 callbacks suppressed
	[Nov 1 10:18] crun[18408]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [58eb28533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc] <==
	{"level":"warn","ts":"2025-11-01T10:11:33.817095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:11:33.863396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46438","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:12:08.583551Z","caller":"traceutil/trace.go:172","msg":"trace[1153351526] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"360.077961ms","start":"2025-11-01T10:12:08.223461Z","end":"2025-11-01T10:12:08.583539Z","steps":["trace[1153351526] 'process raft request'  (duration: 360.001164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:08.584011Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:08.223440Z","time spent":"360.166919ms","remote":"127.0.0.1:45660","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1934,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/sp-pod\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/sp-pod\" value_size:1897 >> failure:<>"}
	{"level":"info","ts":"2025-11-01T10:12:11.557197Z","caller":"traceutil/trace.go:172","msg":"trace[723015122] linearizableReadLoop","detail":"{readStateIndex:656; appliedIndex:656; }","duration":"429.403939ms","start":"2025-11-01T10:12:11.127776Z","end":"2025-11-01T10:12:11.557180Z","steps":["trace[723015122] 'read index received'  (duration: 429.399712ms)","trace[723015122] 'applied index is now lower than readState.Index'  (duration: 3.459µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:12:11.557472Z","caller":"traceutil/trace.go:172","msg":"trace[657617144] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"464.683385ms","start":"2025-11-01T10:12:11.092781Z","end":"2025-11-01T10:12:11.557464Z","steps":["trace[657617144] 'process raft request'  (duration: 464.558391ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:11.558571Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:11.092765Z","time spent":"465.635981ms","remote":"127.0.0.1:45614","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:610 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-11-01T10:12:11.557687Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"429.873539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:11.560319Z","caller":"traceutil/trace.go:172","msg":"trace[866693380] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:611; }","duration":"432.534968ms","start":"2025-11-01T10:12:11.127773Z","end":"2025-11-01T10:12:11.560308Z","steps":["trace[866693380] 'agreement among raft nodes before linearized reading'  (duration: 429.857759ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:11.560351Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:11.127757Z","time spent":"432.58258ms","remote":"127.0.0.1:45660","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-11-01T10:12:14.164943Z","caller":"traceutil/trace.go:172","msg":"trace[762212931] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"217.662746ms","start":"2025-11-01T10:12:13.947270Z","end":"2025-11-01T10:12:14.164932Z","steps":["trace[762212931] 'process raft request'  (duration: 217.469939ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:12:20.244562Z","caller":"traceutil/trace.go:172","msg":"trace[666249141] linearizableReadLoop","detail":"{readStateIndex:693; appliedIndex:693; }","duration":"427.871999ms","start":"2025-11-01T10:12:19.816674Z","end":"2025-11-01T10:12:20.244546Z","steps":["trace[666249141] 'read index received'  (duration: 427.867439ms)","trace[666249141] 'applied index is now lower than readState.Index'  (duration: 3.794µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:12:20.244658Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"427.993311ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.244674Z","caller":"traceutil/trace.go:172","msg":"trace[1789657826] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:645; }","duration":"428.022762ms","start":"2025-11-01T10:12:19.816647Z","end":"2025-11-01T10:12:20.244669Z","steps":["trace[1789657826] 'agreement among raft nodes before linearized reading'  (duration: 427.978929ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:12:20.244757Z","caller":"traceutil/trace.go:172","msg":"trace[1443099741] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"622.159008ms","start":"2025-11-01T10:12:19.622588Z","end":"2025-11-01T10:12:20.244747Z","steps":["trace[1443099741] 'process raft request'  (duration: 622.037625ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.244849Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"406.619182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.245561Z","caller":"traceutil/trace.go:172","msg":"trace[1986868595] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:646; }","duration":"407.332043ms","start":"2025-11-01T10:12:19.838220Z","end":"2025-11-01T10:12:20.245553Z","steps":["trace[1986868595] 'agreement among raft nodes before linearized reading'  (duration: 406.601192ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.245621Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:19.838199Z","time spent":"407.411542ms","remote":"127.0.0.1:45660","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T10:12:20.244878Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.365768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.245875Z","caller":"traceutil/trace.go:172","msg":"trace[2012618942] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:646; }","duration":"118.354669ms","start":"2025-11-01T10:12:20.127509Z","end":"2025-11-01T10:12:20.245864Z","steps":["trace[2012618942] 'agreement among raft nodes before linearized reading'  (duration: 117.357716ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.244894Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.996816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.246312Z","caller":"traceutil/trace.go:172","msg":"trace[524363206] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:646; }","duration":"203.322528ms","start":"2025-11-01T10:12:20.042894Z","end":"2025-11-01T10:12:20.246217Z","steps":["trace[524363206] 'agreement among raft nodes before linearized reading'  (duration: 201.992234ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.245252Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"279.558918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.246654Z","caller":"traceutil/trace.go:172","msg":"trace[1402734132] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:646; }","duration":"280.961849ms","start":"2025-11-01T10:12:19.965685Z","end":"2025-11-01T10:12:20.246647Z","steps":["trace[1402734132] 'agreement among raft nodes before linearized reading'  (duration: 279.545002ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.246164Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:19.622571Z","time spent":"622.75989ms","remote":"127.0.0.1:45614","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:645 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> etcd [f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177] <==
	{"level":"warn","ts":"2025-11-01T10:10:43.150416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.155518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.174317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.176931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.185561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.248134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57298","server-name":"","error":"EOF"}
	2025/11/01 10:10:46 WARNING: [core] [Server #3]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2025-11-01T10:11:12.108518Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:11:12.108993Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-950389","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.40:2380"],"advertise-client-urls":["https://192.168.39.40:2379"]}
	{"level":"error","ts":"2025-11-01T10:11:12.109280Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:11:12.202428Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:11:12.202509Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:11:12.202539Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1088a855a4aa8d0a","current-leader-member-id":"1088a855a4aa8d0a"}
	{"level":"info","ts":"2025-11-01T10:11:12.202652Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T10:11:12.202687Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:11:12.202972Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:11:12.203180Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:11:12.203216Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T10:11:12.203272Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.40:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:11:12.203301Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.40:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:11:12.203309Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.40:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:11:12.206201Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.40:2380"}
	{"level":"error","ts":"2025-11-01T10:11:12.206290Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.40:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:11:12.206334Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.40:2380"}
	{"level":"info","ts":"2025-11-01T10:11:12.206372Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-950389","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.40:2380"],"advertise-client-urls":["https://192.168.39.40:2379"]}
	
	
	==> kernel <==
	 10:19:12 up 15 min,  0 users,  load average: 0.23, 0.40, 0.32
	Linux functional-950389 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154] <==
	I1101 10:11:34.675834       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:11:34.679471       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:11:34.674692       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:11:34.693006       1 policy_source.go:240] refreshing policies
	I1101 10:11:34.693738       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:11:34.716576       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:11:34.744709       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:11:35.480463       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:11:36.463365       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:11:36.515464       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:11:36.570550       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:11:36.583288       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:11:37.941140       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:11:38.189636       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:11:55.076042       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.65.198"}
	I1101 10:11:59.787637       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.182.210"}
	I1101 10:11:59.833818       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:12:02.120377       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.78.168"}
	I1101 10:12:16.031264       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.83.65"}
	E1101 10:12:20.371618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.40:8441->192.168.39.1:47384: use of closed network connection
	E1101 10:12:21.465827       1 conn.go:339] Error on socket receive: read tcp 192.168.39.40:8441->192.168.39.1:47396: use of closed network connection
	E1101 10:12:23.108237       1 conn.go:339] Error on socket receive: read tcp 192.168.39.40:8441->192.168.39.1:47410: use of closed network connection
	I1101 10:14:12.065414       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:14:12.302789       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.234.201"}
	I1101 10:14:12.371558       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.63.107"}
	
	
	==> kube-controller-manager [35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53] <==
	I1101 10:10:50.913409       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:10:50.913448       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:10:50.913471       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:10:50.913519       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:10:50.913525       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:10:50.913881       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:10:50.926331       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:10:50.929618       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-950389" podCIDRs=["10.244.0.0/24"]
	I1101 10:10:50.941302       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:10:50.942033       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:10:50.944162       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:10:50.952149       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:10:50.952184       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:10:50.953662       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:10:50.953913       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:10:50.955705       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:10:50.955023       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:10:50.956044       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:10:50.955035       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:10:50.955053       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:10:50.956890       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:10:50.956977       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:10:50.957028       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-950389"
	I1101 10:10:50.957054       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:10:50.959752       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-controller-manager [e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77] <==
	I1101 10:11:37.986823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:11:37.986886       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:11:37.986947       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:11:37.986998       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:11:37.989290       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:11:37.990575       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:11:37.993905       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:11:37.993951       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:11:37.994110       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:11:37.994161       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-950389"
	I1101 10:11:37.994218       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:11:37.998608       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:11:38.005600       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:11:38.006324       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:11:38.013038       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:11:38.016501       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1101 10:14:12.164598       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.179617       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.185434       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.191012       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.200187       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.204924       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.208759       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.223004       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.223880       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b] <==
	I1101 10:11:36.117752       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:11:36.219368       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:11:36.219424       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.40"]
	E1101 10:11:36.219530       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:11:36.305697       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:11:36.305750       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:11:36.305787       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:11:36.338703       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:11:36.339488       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:11:36.339577       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:11:36.348605       1 config.go:200] "Starting service config controller"
	I1101 10:11:36.348887       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:11:36.350863       1 config.go:309] "Starting node config controller"
	I1101 10:11:36.353887       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:11:36.353899       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:11:36.352809       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:11:36.353905       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:11:36.352796       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:11:36.362327       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:11:36.449909       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:11:36.455219       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:11:36.465605       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [62537ebd8ec8cfb85842acc4c972c6f4c2e963731421d69f8dd4ef7d38a28f75] <==
	I1101 10:10:52.788695       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:10:52.892404       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:10:52.892608       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.40"]
	E1101 10:10:52.893113       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:10:53.042226       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:10:53.042394       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:10:53.042428       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:10:53.079282       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:10:53.082808       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:10:53.082825       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:10:53.089905       1 config.go:200] "Starting service config controller"
	I1101 10:10:53.089918       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:10:53.089930       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:10:53.089933       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:10:53.089940       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:10:53.089944       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:10:53.097801       1 config.go:309] "Starting node config controller"
	I1101 10:10:53.097813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:10:53.097819       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:10:53.190556       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:10:53.190589       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:10:53.190606       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15] <==
	I1101 10:11:33.157776       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:11:34.552390       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:11:34.552487       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:11:34.552513       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:11:34.552535       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:11:34.599177       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:11:34.599216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:11:34.604231       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:34.604272       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:34.606242       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:11:34.606469       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:11:34.706285       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c] <==
	E1101 10:10:44.508024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:10:44.508189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:10:44.508313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:10:44.508437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:10:44.508575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:10:44.508764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:10:44.508894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:10:44.509295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:10:44.509651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:10:44.509662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:10:44.509740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:10:44.509795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:10:44.509871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:10:44.510191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:10:44.510224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:10:44.510349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:10:45.312670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:10:45.669724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 10:10:48.397806       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:12.107394       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:11:12.107447       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:11:12.114303       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:11:12.114823       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:12.118329       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:11:12.118374       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 10:18:25 functional-950389 kubelet[14397]: E1101 10:18:25.707637   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9629694c-f849-48d0-8099-8989879acb4b"
	Nov 01 10:18:30 functional-950389 kubelet[14397]: E1101 10:18:30.811586   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod832ac4e926fa9d3ad2ccc452d513f863/crio-a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7: Error finding container a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7: Status 404 returned error can't find the container with id a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7
	Nov 01 10:18:30 functional-950389 kubelet[14397]: E1101 10:18:30.812221   14397 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod437a47af-7662-481d-b1b7-09379f4069c9/crio-68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446: Error finding container 68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446: Status 404 returned error can't find the container with id 68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446
	Nov 01 10:18:30 functional-950389 kubelet[14397]: E1101 10:18:30.812525   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda502e626-8a66-4687-9b76-053029dabdd6/crio-6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118: Error finding container 6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118: Status 404 returned error can't find the container with id 6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118
	Nov 01 10:18:30 functional-950389 kubelet[14397]: E1101 10:18:30.812900   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7c9758ea-cd15-49e2-893c-e78ed7d30f55/crio-d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5: Error finding container d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5: Status 404 returned error can't find the container with id d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5
	Nov 01 10:18:30 functional-950389 kubelet[14397]: E1101 10:18:30.813427   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/podcfd96429d7f7575fe65285b5903ca594/crio-2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9: Error finding container 2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9: Status 404 returned error can't find the container with id 2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9
	Nov 01 10:18:30 functional-950389 kubelet[14397]: E1101 10:18:30.813744   14397 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda0c48c32-fe99-40bf-b651-b04105adec6b/crio-17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f: Error finding container 17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f: Status 404 returned error can't find the container with id 17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f
	Nov 01 10:18:30 functional-950389 kubelet[14397]: E1101 10:18:30.814140   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5ef73b8d782106f4ce68a921abfa7e79/crio-1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66: Error finding container 1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66: Status 404 returned error can't find the container with id 1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66
	Nov 01 10:18:30 functional-950389 kubelet[14397]: E1101 10:18:30.940300   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992310939474442  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:18:30 functional-950389 kubelet[14397]: E1101 10:18:30.940342   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992310939474442  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:18:40 functional-950389 kubelet[14397]: E1101 10:18:40.708316   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9629694c-f849-48d0-8099-8989879acb4b"
	Nov 01 10:18:40 functional-950389 kubelet[14397]: E1101 10:18:40.942021   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992320941757048  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:18:40 functional-950389 kubelet[14397]: E1101 10:18:40.942043   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992320941757048  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:18:45 functional-950389 kubelet[14397]: E1101 10:18:45.943030   14397 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 10:18:45 functional-950389 kubelet[14397]: E1101 10:18:45.943153   14397 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 10:18:45 functional-950389 kubelet[14397]: E1101 10:18:45.943601   14397 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-wm424_kubernetes-dashboard(6078d1e3-19c1-4501-9499-752f92e11376): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 10:18:45 functional-950389 kubelet[14397]: E1101 10:18:45.943637   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wm424" podUID="6078d1e3-19c1-4501-9499-752f92e11376"
	Nov 01 10:18:50 functional-950389 kubelet[14397]: E1101 10:18:50.943995   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992330943666594  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:18:50 functional-950389 kubelet[14397]: E1101 10:18:50.944042   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992330943666594  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:18:52 functional-950389 kubelet[14397]: E1101 10:18:52.714703   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9629694c-f849-48d0-8099-8989879acb4b"
	Nov 01 10:18:58 functional-950389 kubelet[14397]: E1101 10:18:58.713300   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wm424" podUID="6078d1e3-19c1-4501-9499-752f92e11376"
	Nov 01 10:19:00 functional-950389 kubelet[14397]: E1101 10:19:00.948016   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992340947627470  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:19:00 functional-950389 kubelet[14397]: E1101 10:19:00.948131   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992340947627470  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:19:10 functional-950389 kubelet[14397]: E1101 10:19:10.951034   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992350950734715  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:19:10 functional-950389 kubelet[14397]: E1101 10:19:10.951132   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992350950734715  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	
	
	==> storage-provisioner [273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410] <==
	W1101 10:18:48.418286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:50.421384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:50.431284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:52.434950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:52.439949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:54.444128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:54.449869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:56.453526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:56.459507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:58.463036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:58.469715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:00.473608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:00.479357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:02.482194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:02.490636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:04.494710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:04.499435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:06.503429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:06.509175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:08.512264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:08.517234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:10.520876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:10.530183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:12.534267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:19:12.548424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62] <==
	W1101 10:10:53.971607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:10:53.971757       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:10:53.972439       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc693950-c77e-4542-ade3-eb86356b8127", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-950389_edc0ea9e-5089-482e-a4f3-2ad82dd73b48 became leader
	I1101 10:10:53.972524       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-950389_edc0ea9e-5089-482e-a4f3-2ad82dd73b48!
	W1101 10:10:53.976036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:53.986630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:10:54.073762       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-950389_edc0ea9e-5089-482e-a4f3-2ad82dd73b48!
	W1101 10:10:55.990039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:55.995479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:57.999172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:58.005427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:00.008624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:00.014243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:02.018143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:02.022795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:04.030153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:04.037311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:06.041900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:06.047277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:08.051696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:08.062794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:10.069361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:10.086725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:12.091000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:12.096762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
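
The kubelet entries in the dump above end in repeated ImagePullBackOff and eviction-manager "missing image stats" messages. A short sketch, assuming shell access to the host that produced this log, for inspecting the node's image store and recent pull errors directly (crictl and journalctl are both available inside the minikube guest; the profile name functional-950389 is taken from the log):

	# Images actually present in cri-o inside the functional-950389 VM; anything a pod
	# is waiting on in ImagePullBackOff should be absent from this list.
	out/minikube-linux-amd64 -p functional-950389 ssh -- sudo crictl images

	# Recent pull failures straight from the node's kubelet journal
	# (grep runs locally on the streamed output).
	out/minikube-linux-amd64 -p functional-950389 ssh -- sudo journalctl -u kubelet --since=-10min | grep -i pull
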
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-950389 -n functional-950389
helpers_test.go:269: (dbg) Run:  kubectl --context functional-950389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-950389 describe pod busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-950389 describe pod busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424: exit status 1 (104.367133ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:24 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  mount-munger:
	    Container ID:  cri-o://c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 10:14:05 +0000
	      Finished:     Sat, 01 Nov 2025 10:14:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2nqsv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2nqsv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  6m49s  default-scheduler  Successfully assigned default/busybox-mount to functional-950389
	  Normal  Pulling    6m48s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m8s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.385s (1m40.236s including waiting). Image size: 4631262 bytes.
	  Normal  Created    5m8s   kubelet            Created container: mount-munger
	  Normal  Started    5m8s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-wws2s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:15 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqfqw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zqfqw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m58s                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wws2s to functional-950389
	  Warning  Failed     2m16s (x2 over 5m12s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m16s (x2 over 5m12s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2m5s (x2 over 5m11s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2m5s (x2 over 5m11s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    114s (x3 over 6m57s)   kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-t7gtf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:02 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7w5p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-b7w5p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  7m11s                default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-t7gtf to functional-950389
	  Warning  Failed     4m37s                kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     90s (x2 over 6m14s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     90s (x3 over 6m14s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    51s (x5 over 6m14s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     51s (x5 over 6m14s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    39s (x4 over 7m11s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:08 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkznd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-pkznd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  7m5s                 default-scheduler  Successfully assigned default/sp-pod to functional-950389
	  Warning  Failed     3m51s                kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     59s (x2 over 5m43s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     59s (x3 over 5m43s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    21s (x5 over 5m42s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     21s (x5 over 5m42s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    9s (x4 over 7m)      kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-ljjhn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wm424" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-950389 describe pod busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.47s)
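
Every pull failure in this test traces back to Docker Hub's unauthenticated rate limit (toomanyrequests) rather than to the cluster itself. A minimal sketch for confirming the remaining anonymous pull quota and for side-loading the affected images into the profile so kubelet no longer pulls from docker.io; the profile name functional-950389 and the image names come from the output above, while curl, jq and locally available copies of the images are assumptions about the host running the job:

	# Probe the anonymous Docker Hub quota left for this IP (Docker's documented rate-limit check repository).
	TOKEN=$(curl -fsS "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" \
	  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

	# Side-load the images the failed pods are waiting on, so they are served from the
	# node's local store instead of docker.io (assumes the images are already present
	# on the host, e.g. pulled earlier or restored from a cache).
	for img in docker.io/nginx docker.io/kicbase/echo-server:latest \
	           docker.io/kubernetesui/dashboard:v2.7.0; do
	  out/minikube-linux-amd64 -p functional-950389 image load "$img"
	done

Using out/minikube-linux-amd64, the binary already invoked throughout this report, keeps the sketch consistent with the commands the test harness itself runs.
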

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-950389 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-950389 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-t7gtf" [e6aa5eba-2bc1-4f18-9d27-1e0bc284884d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-950389 -n functional-950389
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-01 10:22:02.377765215 +0000 UTC m=+1937.119973213
functional_test.go:1645: (dbg) Run:  kubectl --context functional-950389 describe po hello-node-connect-7d85dfc575-t7gtf -n default
functional_test.go:1645: (dbg) kubectl --context functional-950389 describe po hello-node-connect-7d85dfc575-t7gtf -n default:
Name:             hello-node-connect-7d85dfc575-t7gtf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-950389/192.168.39.40
Start Time:       Sat, 01 Nov 2025 10:12:02 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:           10.244.0.12
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7w5p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-b7w5p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-t7gtf to functional-950389
Warning  Failed     4m19s (x2 over 9m3s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m28s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     71s (x4 over 9m3s)    kubelet            Error: ErrImagePull
Warning  Failed     71s (x2 over 7m26s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    6s (x10 over 9m3s)    kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     6s (x10 over 9m3s)    kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-950389 logs hello-node-connect-7d85dfc575-t7gtf -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-950389 logs hello-node-connect-7d85dfc575-t7gtf -n default: exit status 1 (74.486816ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-t7gtf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-950389 logs hello-node-connect-7d85dfc575-t7gtf -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-950389 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-t7gtf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-950389/192.168.39.40
Start Time:       Sat, 01 Nov 2025 10:12:02 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:           10.244.0.12
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7w5p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-b7w5p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-t7gtf to functional-950389
Warning  Failed     4m19s (x2 over 9m3s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m28s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     71s (x4 over 9m3s)    kubelet            Error: ErrImagePull
Warning  Failed     71s (x2 over 7m26s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    6s (x10 over 9m3s)    kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     6s (x10 over 9m3s)    kubelet            Error: ImagePullBackOff

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-950389 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-950389 logs -l app=hello-node-connect: exit status 1 (68.186525ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-t7gtf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-950389 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-950389 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.100.78.168
IPs:                      10.100.78.168
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32400/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
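The Endpoints field in the service describe above is blank: because the only pod matching app=hello-node-connect never becomes Ready (it is stuck pulling kicbase/echo-server), nothing is ever registered behind NodePort 32400, so the connectivity check has nothing to reach. A small client-go sketch of that check follows; as with the previous sketch, the kubeconfig path and program structure are assumptions for illustration, not code from functional_test.go.

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}

		// The Endpoints object shares the Service's name; empty Subsets means no
		// ready pod backs the NodePort, matching the blank "Endpoints:" line above.
		ep, err := cs.CoreV1().Endpoints("default").Get(context.TODO(), "hello-node-connect", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		ready := 0
		for _, subset := range ep.Subsets {
			ready += len(subset.Addresses)
		}
		fmt.Printf("ready endpoint addresses behind hello-node-connect: %d\n", ready)
	}
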
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-950389 -n functional-950389
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 logs -n 25: (1.630045612s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdspecific-port3461040035/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh            │ functional-950389 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh            │ functional-950389 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh            │ functional-950389 ssh -- ls -la /mount-9p                                                                                         │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh            │ functional-950389 ssh sudo umount -f /mount-9p                                                                                    │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ mount          │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount1 --alsologtostderr -v=1                 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ mount          │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount2 --alsologtostderr -v=1                 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ mount          │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount3 --alsologtostderr -v=1                 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh            │ functional-950389 ssh findmnt -T /mount1                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh            │ functional-950389 ssh findmnt -T /mount1                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh            │ functional-950389 ssh findmnt -T /mount2                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh            │ functional-950389 ssh findmnt -T /mount3                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ mount          │ -p functional-950389 --kill=true                                                                                                  │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-950389 --alsologtostderr -v=1                                                                    │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ start          │ -p functional-950389 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                           │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ update-context │ functional-950389 update-context --alsologtostderr -v=2                                                                           │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ update-context │ functional-950389 update-context --alsologtostderr -v=2                                                                           │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ update-context │ functional-950389 update-context --alsologtostderr -v=2                                                                           │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ image          │ functional-950389 image ls --format short --alsologtostderr                                                                       │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ image          │ functional-950389 image ls --format yaml --alsologtostderr                                                                        │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ ssh            │ functional-950389 ssh pgrep buildkitd                                                                                             │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ image          │ functional-950389 image build -t localhost/my-image:functional-950389 testdata/build --alsologtostderr                            │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ image          │ functional-950389 image ls                                                                                                        │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ image          │ functional-950389 image ls --format json --alsologtostderr                                                                        │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ image          │ functional-950389 image ls --format table --alsologtostderr                                                                       │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:18:11
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:18:11.338627   83952 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:18:11.338735   83952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:18:11.338746   83952 out.go:374] Setting ErrFile to fd 2...
	I1101 10:18:11.338753   83952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:18:11.339054   83952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 10:18:11.339498   83952 out.go:368] Setting JSON to false
	I1101 10:18:11.340456   83952 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7239,"bootTime":1761985052,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:18:11.340559   83952 start.go:143] virtualization: kvm guest
	I1101 10:18:11.342408   83952 out.go:179] * [functional-950389] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:18:11.343782   83952 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:18:11.343812   83952 notify.go:221] Checking for updates...
	I1101 10:18:11.346339   83952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:18:11.347789   83952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 10:18:11.348964   83952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 10:18:11.350180   83952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:18:11.351488   83952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:18:11.353388   83952 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:18:11.354047   83952 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:18:11.385249   83952 out.go:179] * Using the kvm2 driver based on the existing profile
	I1101 10:18:11.386557   83952 start.go:309] selected driver: kvm2
	I1101 10:18:11.386576   83952 start.go:930] validating driver "kvm2" against &{Name:functional-950389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-950389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:18:11.386679   83952 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:18:11.388574   83952 out.go:203] 
	W1101 10:18:11.389749   83952 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 10:18:11.390833   83952 out.go:203] 
	
	
	==> CRI-O <==
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.436302761Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992523436277397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0b1f624-ee56-46d2-8bb7-8429da33d79f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.437279265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7db33fb-96e0-4dda-bc11-3e796a7fb7a3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.437421492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7db33fb-96e0-4dda-bc11-3e796a7fb7a3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.437791641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7db33fb-96e0-4dda-bc11-3e796a7fb7a3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.489938533Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a87e74f5-6cfb-4aef-9ab9-64ee06f60acc name=/runtime.v1.RuntimeService/Version
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.490025870Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a87e74f5-6cfb-4aef-9ab9-64ee06f60acc name=/runtime.v1.RuntimeService/Version
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.491892911Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=56471f34-6e9c-49b4-b1d8-a6d70550abf7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.492786449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992523492759907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56471f34-6e9c-49b4-b1d8-a6d70550abf7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.493391396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=630f075c-a102-490b-abf3-a4bec8eaab27 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.493442250Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=630f075c-a102-490b-abf3-a4bec8eaab27 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.494662713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=630f075c-a102-490b-abf3-a4bec8eaab27 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.532810127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac9ae4ed-15cd-4b35-b65a-6326c2aa50e4 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.533404917Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac9ae4ed-15cd-4b35-b65a-6326c2aa50e4 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.534496394Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e35d15e-db5f-4e6a-9ee6-bbccc41cbf06 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.536036588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992523535924364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e35d15e-db5f-4e6a-9ee6-bbccc41cbf06 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.536711319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39923aab-9662-4d81-b696-58a8ddcf72e9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.536808308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39923aab-9662-4d81-b696-58a8ddcf72e9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.537228837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39923aab-9662-4d81-b696-58a8ddcf72e9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.586471719Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cface76c-54f3-431d-a32b-d93e1106cbc9 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.586544124Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cface76c-54f3-431d-a32b-d93e1106cbc9 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.587925848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=011b9b3c-da32-4312-aad4-3ce28d17b370 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.588642030Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992523588617963,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:220510,},InodesUsed:&UInt64Value{Value:108,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=011b9b3c-da32-4312-aad4-3ce28d17b370 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.589243386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=deeb90b6-b46b-477d-8827-e1aac41693e2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.589321468Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=deeb90b6-b46b-477d-8827-e1aac41693e2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:22:03 functional-950389 crio[14025]: time="2025-11-01 10:22:03.589629843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=deeb90b6-b46b-477d-8827-e1aac41693e2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c2e6bc4021502       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   7 minutes ago       Exited              mount-munger              0                   cbcb2c689c2b5       busybox-mount
	a5adf9564ca63       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb       9 minutes ago       Running             mysql                     0                   6b974e48e60d2       mysql-5bb876957f-nnckx
	774e7ac99a154       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   1                   c72bccebda6e8       coredns-66bc5c9577-6ps9z
	9c5bd32acfec7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 minutes ago      Running             coredns                   1                   1756d604cc85a       coredns-66bc5c9577-5lfgh
	0bea04e4d0bfb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      10 minutes ago      Running             kube-proxy                1                   f63e3783b0088       kube-proxy-jtt6l
	273ca0fdb8a91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Running             storage-provisioner       1                   26487b7cf6878       storage-provisioner
	58eb28533db82       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      10 minutes ago      Running             etcd                      4                   88cad2727994a       etcd-functional-950389
	186f819959a13       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      10 minutes ago      Running             kube-scheduler            4                   4761b5b9cfad0       kube-scheduler-functional-950389
	b5396a2c9c588       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      10 minutes ago      Running             kube-apiserver            0                   8a83bc2cd1d4b       kube-apiserver-functional-950389
	e90b96d4e9728       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      10 minutes ago      Running             kube-controller-manager   8                   139a554d695f0       kube-controller-manager-functional-950389
	a2f46ea37f76e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 minutes ago      Exited              storage-provisioner       0                   68f5ead72a631       storage-provisioner
	8d93163c73425       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   0                   d9b89241d322c       coredns-66bc5c9577-5lfgh
	9f54db03325aa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 minutes ago      Exited              coredns                   0                   6554bb887a4e4       coredns-66bc5c9577-6ps9z
	62537ebd8ec8c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      11 minutes ago      Exited              kube-proxy                0                   17b51286c59a8       kube-proxy-jtt6l
	f49d0ac915f87       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      11 minutes ago      Exited              etcd                      3                   2c85f617f556b       etcd-functional-950389
	35509da8a528e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      11 minutes ago      Exited              kube-controller-manager   7                   1e98d8cc22231       kube-controller-manager-functional-950389
	39af2ac349d69       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      11 minutes ago      Exited              kube-scheduler            3                   a312f42dd95c1       kube-scheduler-functional-950389
	
	
	==> coredns [774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> coredns [8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> coredns [9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-950389
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-950389
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=functional-950389
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_10_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:10:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-950389
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:21:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:18:33 +0000   Sat, 01 Nov 2025 10:10:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:18:33 +0000   Sat, 01 Nov 2025 10:10:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:18:33 +0000   Sat, 01 Nov 2025 10:10:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:18:33 +0000   Sat, 01 Nov 2025 10:10:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    functional-950389
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 50219d7eb2434b54bfeb5a3ddfefd678
	  System UUID:                50219d7e-b243-4b54-bfeb-5a3ddfefd678
	  Boot ID:                    f9fafb52-9d25-4c51-b234-2193020a6a0b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-wws2s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  default                     hello-node-connect-7d85dfc575-t7gtf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-nnckx                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-5lfgh                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 coredns-66bc5c9577-6ps9z                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     11m
	  kube-system                 etcd-functional-950389                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         11m
	  kube-system                 kube-apiserver-functional-950389              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-950389     200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-jtt6l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-950389              100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-ljjhn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wm424         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node functional-950389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node functional-950389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node functional-950389 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-950389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-950389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-950389 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           11m                node-controller  Node functional-950389 event: Registered Node functional-950389 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-950389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-950389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-950389 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-950389 event: Registered Node functional-950389 in Controller
	
	
	==> dmesg <==
	[  +0.000074] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.833497] kauditd_printk_skb: 249 callbacks suppressed
	[ +20.643469] kauditd_printk_skb: 38 callbacks suppressed
	[ +13.048600] kauditd_printk_skb: 11 callbacks suppressed
	[Nov 1 10:06] kauditd_printk_skb: 263 callbacks suppressed
	[ +13.559184] kauditd_printk_skb: 154 callbacks suppressed
	[Nov 1 10:07] kauditd_printk_skb: 5 callbacks suppressed
	[ +11.574466] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 10:08] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 10:10] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.100652] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.156569] kauditd_printk_skb: 132 callbacks suppressed
	[  +0.226013] kauditd_printk_skb: 12 callbacks suppressed
	[Nov 1 10:11] kauditd_printk_skb: 170 callbacks suppressed
	[  +0.112112] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.976961] kauditd_printk_skb: 232 callbacks suppressed
	[  +4.355116] kauditd_printk_skb: 154 callbacks suppressed
	[ +18.317316] kauditd_printk_skb: 167 callbacks suppressed
	[Nov 1 10:12] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000026] kauditd_printk_skb: 80 callbacks suppressed
	[  +0.000138] kauditd_printk_skb: 95 callbacks suppressed
	[  +6.081767] kauditd_printk_skb: 26 callbacks suppressed
	[Nov 1 10:14] kauditd_printk_skb: 29 callbacks suppressed
	[Nov 1 10:15] kauditd_printk_skb: 68 callbacks suppressed
	[Nov 1 10:18] crun[18408]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [58eb28533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc] <==
	{"level":"warn","ts":"2025-11-01T10:12:08.584011Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:08.223440Z","time spent":"360.166919ms","remote":"127.0.0.1:45660","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1934,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/sp-pod\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/sp-pod\" value_size:1897 >> failure:<>"}
	{"level":"info","ts":"2025-11-01T10:12:11.557197Z","caller":"traceutil/trace.go:172","msg":"trace[723015122] linearizableReadLoop","detail":"{readStateIndex:656; appliedIndex:656; }","duration":"429.403939ms","start":"2025-11-01T10:12:11.127776Z","end":"2025-11-01T10:12:11.557180Z","steps":["trace[723015122] 'read index received'  (duration: 429.399712ms)","trace[723015122] 'applied index is now lower than readState.Index'  (duration: 3.459µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:12:11.557472Z","caller":"traceutil/trace.go:172","msg":"trace[657617144] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"464.683385ms","start":"2025-11-01T10:12:11.092781Z","end":"2025-11-01T10:12:11.557464Z","steps":["trace[657617144] 'process raft request'  (duration: 464.558391ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:11.558571Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:11.092765Z","time spent":"465.635981ms","remote":"127.0.0.1:45614","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:610 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-11-01T10:12:11.557687Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"429.873539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:11.560319Z","caller":"traceutil/trace.go:172","msg":"trace[866693380] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:611; }","duration":"432.534968ms","start":"2025-11-01T10:12:11.127773Z","end":"2025-11-01T10:12:11.560308Z","steps":["trace[866693380] 'agreement among raft nodes before linearized reading'  (duration: 429.857759ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:11.560351Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:11.127757Z","time spent":"432.58258ms","remote":"127.0.0.1:45660","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-11-01T10:12:14.164943Z","caller":"traceutil/trace.go:172","msg":"trace[762212931] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"217.662746ms","start":"2025-11-01T10:12:13.947270Z","end":"2025-11-01T10:12:14.164932Z","steps":["trace[762212931] 'process raft request'  (duration: 217.469939ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:12:20.244562Z","caller":"traceutil/trace.go:172","msg":"trace[666249141] linearizableReadLoop","detail":"{readStateIndex:693; appliedIndex:693; }","duration":"427.871999ms","start":"2025-11-01T10:12:19.816674Z","end":"2025-11-01T10:12:20.244546Z","steps":["trace[666249141] 'read index received'  (duration: 427.867439ms)","trace[666249141] 'applied index is now lower than readState.Index'  (duration: 3.794µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:12:20.244658Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"427.993311ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.244674Z","caller":"traceutil/trace.go:172","msg":"trace[1789657826] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:645; }","duration":"428.022762ms","start":"2025-11-01T10:12:19.816647Z","end":"2025-11-01T10:12:20.244669Z","steps":["trace[1789657826] 'agreement among raft nodes before linearized reading'  (duration: 427.978929ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:12:20.244757Z","caller":"traceutil/trace.go:172","msg":"trace[1443099741] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"622.159008ms","start":"2025-11-01T10:12:19.622588Z","end":"2025-11-01T10:12:20.244747Z","steps":["trace[1443099741] 'process raft request'  (duration: 622.037625ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.244849Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"406.619182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.245561Z","caller":"traceutil/trace.go:172","msg":"trace[1986868595] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:646; }","duration":"407.332043ms","start":"2025-11-01T10:12:19.838220Z","end":"2025-11-01T10:12:20.245553Z","steps":["trace[1986868595] 'agreement among raft nodes before linearized reading'  (duration: 406.601192ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.245621Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:19.838199Z","time spent":"407.411542ms","remote":"127.0.0.1:45660","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T10:12:20.244878Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.365768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.245875Z","caller":"traceutil/trace.go:172","msg":"trace[2012618942] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:646; }","duration":"118.354669ms","start":"2025-11-01T10:12:20.127509Z","end":"2025-11-01T10:12:20.245864Z","steps":["trace[2012618942] 'agreement among raft nodes before linearized reading'  (duration: 117.357716ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.244894Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.996816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.246312Z","caller":"traceutil/trace.go:172","msg":"trace[524363206] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:646; }","duration":"203.322528ms","start":"2025-11-01T10:12:20.042894Z","end":"2025-11-01T10:12:20.246217Z","steps":["trace[524363206] 'agreement among raft nodes before linearized reading'  (duration: 201.992234ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.245252Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"279.558918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.246654Z","caller":"traceutil/trace.go:172","msg":"trace[1402734132] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:646; }","duration":"280.961849ms","start":"2025-11-01T10:12:19.965685Z","end":"2025-11-01T10:12:20.246647Z","steps":["trace[1402734132] 'agreement among raft nodes before linearized reading'  (duration: 279.545002ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.246164Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:19.622571Z","time spent":"622.75989ms","remote":"127.0.0.1:45614","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:645 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-11-01T10:21:32.799232Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":977}
	{"level":"info","ts":"2025-11-01T10:21:32.809840Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":977,"took":"9.928852ms","hash":202016809,"current-db-size-bytes":3104768,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":3104768,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2025-11-01T10:21:32.809886Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":202016809,"revision":977,"compact-revision":-1}
	
	
	==> etcd [f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177] <==
	{"level":"warn","ts":"2025-11-01T10:10:43.150416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.155518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.174317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.176931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.185561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.248134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57298","server-name":"","error":"EOF"}
	2025/11/01 10:10:46 WARNING: [core] [Server #3]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2025-11-01T10:11:12.108518Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:11:12.108993Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-950389","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.40:2380"],"advertise-client-urls":["https://192.168.39.40:2379"]}
	{"level":"error","ts":"2025-11-01T10:11:12.109280Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:11:12.202428Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:11:12.202509Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:11:12.202539Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1088a855a4aa8d0a","current-leader-member-id":"1088a855a4aa8d0a"}
	{"level":"info","ts":"2025-11-01T10:11:12.202652Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T10:11:12.202687Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:11:12.202972Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:11:12.203180Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:11:12.203216Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T10:11:12.203272Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.40:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:11:12.203301Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.40:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:11:12.203309Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.40:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:11:12.206201Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.40:2380"}
	{"level":"error","ts":"2025-11-01T10:11:12.206290Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.40:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:11:12.206334Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.40:2380"}
	{"level":"info","ts":"2025-11-01T10:11:12.206372Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-950389","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.40:2380"],"advertise-client-urls":["https://192.168.39.40:2379"]}
	
	
	==> kernel <==
	 10:22:04 up 18 min,  0 users,  load average: 0.37, 0.34, 0.30
	Linux functional-950389 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154] <==
	I1101 10:11:34.679471       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:11:34.674692       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:11:34.693006       1 policy_source.go:240] refreshing policies
	I1101 10:11:34.693738       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:11:34.716576       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:11:34.744709       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:11:35.480463       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:11:36.463365       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:11:36.515464       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:11:36.570550       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:11:36.583288       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:11:37.941140       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:11:38.189636       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:11:55.076042       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.65.198"}
	I1101 10:11:59.787637       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.182.210"}
	I1101 10:11:59.833818       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:12:02.120377       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.78.168"}
	I1101 10:12:16.031264       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.83.65"}
	E1101 10:12:20.371618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.40:8441->192.168.39.1:47384: use of closed network connection
	E1101 10:12:21.465827       1 conn.go:339] Error on socket receive: read tcp 192.168.39.40:8441->192.168.39.1:47396: use of closed network connection
	E1101 10:12:23.108237       1 conn.go:339] Error on socket receive: read tcp 192.168.39.40:8441->192.168.39.1:47410: use of closed network connection
	I1101 10:14:12.065414       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:14:12.302789       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.234.201"}
	I1101 10:14:12.371558       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.63.107"}
	I1101 10:21:34.589487       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53] <==
	I1101 10:10:50.913409       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:10:50.913448       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:10:50.913471       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:10:50.913519       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:10:50.913525       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:10:50.913881       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:10:50.926331       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:10:50.929618       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-950389" podCIDRs=["10.244.0.0/24"]
	I1101 10:10:50.941302       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:10:50.942033       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:10:50.944162       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:10:50.952149       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:10:50.952184       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:10:50.953662       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:10:50.953913       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:10:50.955705       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:10:50.955023       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:10:50.956044       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:10:50.955035       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:10:50.955053       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:10:50.956890       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:10:50.956977       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:10:50.957028       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-950389"
	I1101 10:10:50.957054       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:10:50.959752       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-controller-manager [e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77] <==
	I1101 10:11:37.986823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:11:37.986886       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:11:37.986947       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:11:37.986998       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:11:37.989290       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:11:37.990575       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:11:37.993905       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:11:37.993951       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:11:37.994110       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:11:37.994161       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-950389"
	I1101 10:11:37.994218       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:11:37.998608       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:11:38.005600       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:11:38.006324       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:11:38.013038       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:11:38.016501       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1101 10:14:12.164598       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.179617       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.185434       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.191012       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.200187       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.204924       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.208759       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.223004       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.223880       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b] <==
	I1101 10:11:36.117752       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:11:36.219368       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:11:36.219424       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.40"]
	E1101 10:11:36.219530       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:11:36.305697       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:11:36.305750       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:11:36.305787       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:11:36.338703       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:11:36.339488       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:11:36.339577       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:11:36.348605       1 config.go:200] "Starting service config controller"
	I1101 10:11:36.348887       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:11:36.350863       1 config.go:309] "Starting node config controller"
	I1101 10:11:36.353887       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:11:36.353899       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:11:36.352809       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:11:36.353905       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:11:36.352796       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:11:36.362327       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:11:36.449909       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:11:36.455219       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:11:36.465605       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [62537ebd8ec8cfb85842acc4c972c6f4c2e963731421d69f8dd4ef7d38a28f75] <==
	I1101 10:10:52.788695       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:10:52.892404       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:10:52.892608       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.40"]
	E1101 10:10:52.893113       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:10:53.042226       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:10:53.042394       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:10:53.042428       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:10:53.079282       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:10:53.082808       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:10:53.082825       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:10:53.089905       1 config.go:200] "Starting service config controller"
	I1101 10:10:53.089918       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:10:53.089930       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:10:53.089933       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:10:53.089940       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:10:53.089944       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:10:53.097801       1 config.go:309] "Starting node config controller"
	I1101 10:10:53.097813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:10:53.097819       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:10:53.190556       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:10:53.190589       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:10:53.190606       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15] <==
	I1101 10:11:33.157776       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:11:34.552390       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:11:34.552487       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:11:34.552513       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:11:34.552535       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:11:34.599177       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:11:34.599216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:11:34.604231       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:34.604272       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:34.606242       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:11:34.606469       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:11:34.706285       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c] <==
	E1101 10:10:44.508024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:10:44.508189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:10:44.508313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:10:44.508437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:10:44.508575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:10:44.508764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:10:44.508894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:10:44.509295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:10:44.509651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:10:44.509662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:10:44.509740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:10:44.509795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:10:44.509871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:10:44.510191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:10:44.510224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:10:44.510349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:10:45.312670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:10:45.669724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 10:10:48.397806       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:12.107394       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:11:12.107447       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:11:12.114303       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:11:12.114823       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:12.118329       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:11:12.118374       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 10:21:29 functional-950389 kubelet[14397]: E1101 10:21:29.707728   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t7gtf" podUID="e6aa5eba-2bc1-4f18-9d27-1e0bc284884d"
	Nov 01 10:21:30 functional-950389 kubelet[14397]: E1101 10:21:30.811413   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/podcfd96429d7f7575fe65285b5903ca594/crio-2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9: Error finding container 2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9: Status 404 returned error can't find the container with id 2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9
	Nov 01 10:21:30 functional-950389 kubelet[14397]: E1101 10:21:30.811758   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5ef73b8d782106f4ce68a921abfa7e79/crio-1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66: Error finding container 1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66: Status 404 returned error can't find the container with id 1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66
	Nov 01 10:21:30 functional-950389 kubelet[14397]: E1101 10:21:30.812213   14397 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod437a47af-7662-481d-b1b7-09379f4069c9/crio-68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446: Error finding container 68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446: Status 404 returned error can't find the container with id 68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446
	Nov 01 10:21:30 functional-950389 kubelet[14397]: E1101 10:21:30.812963   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod832ac4e926fa9d3ad2ccc452d513f863/crio-a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7: Error finding container a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7: Status 404 returned error can't find the container with id a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7
	Nov 01 10:21:30 functional-950389 kubelet[14397]: E1101 10:21:30.813445   14397 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda0c48c32-fe99-40bf-b651-b04105adec6b/crio-17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f: Error finding container 17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f: Status 404 returned error can't find the container with id 17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f
	Nov 01 10:21:30 functional-950389 kubelet[14397]: E1101 10:21:30.813702   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7c9758ea-cd15-49e2-893c-e78ed7d30f55/crio-d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5: Error finding container d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5: Status 404 returned error can't find the container with id d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5
	Nov 01 10:21:30 functional-950389 kubelet[14397]: E1101 10:21:30.814316   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda502e626-8a66-4687-9b76-053029dabdd6/crio-6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118: Error finding container 6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118: Status 404 returned error can't find the container with id 6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118
	Nov 01 10:21:31 functional-950389 kubelet[14397]: E1101 10:21:31.009681   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992491007029236  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:21:31 functional-950389 kubelet[14397]: E1101 10:21:31.009730   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992491007029236  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:21:35 functional-950389 kubelet[14397]: E1101 10:21:35.707758   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9629694c-f849-48d0-8099-8989879acb4b"
	Nov 01 10:21:41 functional-950389 kubelet[14397]: E1101 10:21:41.012118   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992501011376064  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:21:41 functional-950389 kubelet[14397]: E1101 10:21:41.012166   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992501011376064  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:21:43 functional-950389 kubelet[14397]: E1101 10:21:43.707757   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t7gtf" podUID="e6aa5eba-2bc1-4f18-9d27-1e0bc284884d"
	Nov 01 10:21:46 functional-950389 kubelet[14397]: E1101 10:21:46.709042   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9629694c-f849-48d0-8099-8989879acb4b"
	Nov 01 10:21:51 functional-950389 kubelet[14397]: E1101 10:21:51.014495   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992511013764564  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:21:51 functional-950389 kubelet[14397]: E1101 10:21:51.014677   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992511013764564  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:21:53 functional-950389 kubelet[14397]: E1101 10:21:53.873451   14397 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 10:21:53 functional-950389 kubelet[14397]: E1101 10:21:53.873525   14397 kuberuntime_image.go:43] "Failed to pull image" err="reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 10:21:53 functional-950389 kubelet[14397]: E1101 10:21:53.873754   14397 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-wm424_kubernetes-dashboard(6078d1e3-19c1-4501-9499-752f92e11376): ErrImagePull: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 10:21:53 functional-950389 kubelet[14397]: E1101 10:21:53.873798   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wm424" podUID="6078d1e3-19c1-4501-9499-752f92e11376"
	Nov 01 10:21:56 functional-950389 kubelet[14397]: E1101 10:21:56.707508   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t7gtf" podUID="e6aa5eba-2bc1-4f18-9d27-1e0bc284884d"
	Nov 01 10:22:00 functional-950389 kubelet[14397]: E1101 10:22:00.707991   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="9629694c-f849-48d0-8099-8989879acb4b"
	Nov 01 10:22:01 functional-950389 kubelet[14397]: E1101 10:22:01.019329   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992521018775295  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	Nov 01 10:22:01 functional-950389 kubelet[14397]: E1101 10:22:01.019370   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992521018775295  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:220510}  inodes_used:{value:108}}"
	
	
	==> storage-provisioner [273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410] <==
	W1101 10:21:39.328815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:41.332431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:41.341704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:43.345641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:43.351319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:45.354932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:45.360959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:47.364420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:47.369050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:49.372974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:49.380803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:51.383909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:51.388970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:53.393780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:53.399746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:55.403315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:55.409243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:57.413711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:57.423986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:59.426934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:21:59.432008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:01.435441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:01.445051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:03.448591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:22:03.458819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62] <==
	W1101 10:10:53.971607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:10:53.971757       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:10:53.972439       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc693950-c77e-4542-ade3-eb86356b8127", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-950389_edc0ea9e-5089-482e-a4f3-2ad82dd73b48 became leader
	I1101 10:10:53.972524       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-950389_edc0ea9e-5089-482e-a4f3-2ad82dd73b48!
	W1101 10:10:53.976036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:53.986630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:10:54.073762       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-950389_edc0ea9e-5089-482e-a4f3-2ad82dd73b48!
	W1101 10:10:55.990039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:55.995479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:57.999172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:58.005427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:00.008624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:00.014243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:02.018143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:02.022795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:04.030153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:04.037311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:06.041900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:06.047277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:08.051696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:08.062794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:10.069361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:10.086725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:12.091000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:12.096762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-950389 -n functional-950389
helpers_test.go:269: (dbg) Run:  kubectl --context functional-950389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-950389 describe pod busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-950389 describe pod busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424: exit status 1 (98.789204ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:24 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  mount-munger:
	    Container ID:  cri-o://c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 10:14:05 +0000
	      Finished:     Sat, 01 Nov 2025 10:14:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2nqsv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2nqsv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m40s  default-scheduler  Successfully assigned default/busybox-mount to functional-950389
	  Normal  Pulling    9m39s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     7m59s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.385s (1m40.236s including waiting). Image size: 4631262 bytes.
	  Normal  Created    7m59s  kubelet            Created container: mount-munger
	  Normal  Started    7m59s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-wws2s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:15 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqfqw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zqfqw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  9m49s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wws2s to functional-950389
	  Warning  Failed     5m7s (x2 over 8m3s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     105s (x3 over 8m3s)  kubelet            Error: ErrImagePull
	  Warning  Failed     105s                 kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    69s (x5 over 8m2s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     69s (x5 over 8m2s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    58s (x4 over 9m48s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-t7gtf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:02 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7w5p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-b7w5p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-t7gtf to functional-950389
	  Warning  Failed     4m21s (x2 over 9m5s)  kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m30s (x4 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     73s (x4 over 9m5s)    kubelet            Error: ErrImagePull
	  Warning  Failed     73s (x2 over 7m28s)   kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    8s (x10 over 9m5s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     8s (x10 over 9m5s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:08 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkznd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-pkznd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  9m57s                 default-scheduler  Successfully assigned default/sp-pod to functional-950389
	  Warning  Failed     6m43s                 kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m1s (x4 over 9m52s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     43s (x3 over 8m35s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     43s (x4 over 8m35s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    5s (x8 over 8m34s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     5s (x8 over 8m34s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-ljjhn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wm424" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-950389 describe pod busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.07s)
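Every image-pull failure recorded above has the same root cause: Docker Hub's unauthenticated pull rate limit ("toomanyrequests") while fetching kicbase/echo-server, docker.io/nginx and kubernetesui/dashboard. A minimal local workaround sketch, assuming Docker is available on the host and the functional-950389 profile from this run (the :latest tags are illustrative; the bare image references in the manifests resolve to latest):

	# pull once on the host, where the request can be authenticated with docker login or served from a cache
	docker pull kicbase/echo-server:latest
	docker pull nginx:latest
	# side-load the images into the minikube node so the kubelet never has to contact Docker Hub
	minikube -p functional-950389 image load kicbase/echo-server:latest
	minikube -p functional-950389 image load nginx:latest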

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (370.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [437a47af-7662-481d-b1b7-09379f4069c9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004885614s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-950389 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-950389 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-950389 get pvc myclaim -o=json
I1101 10:12:06.315231   73998 retry.go:31] will retry after 1.71890486s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:bf60887a-0726-4469-93e1-44d890ea2759 ResourceVersion:595 Generation:0 CreationTimestamp:2025-11-01 10:12:06 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001b6b4d0 VolumeMode:0xc001b6b4e0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-950389 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-950389 apply -f testdata/storage-provisioner/pod.yaml
I1101 10:12:08.603711   73998 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [9629694c-f849-48d0-8099-8989879acb4b] Pending
helpers_test.go:352: "sp-pod" [9629694c-f849-48d0-8099-8989879acb4b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-950389 -n functional-950389
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-11-01 10:18:08.838912486 +0000 UTC m=+1703.581120463
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-950389 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-950389 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-950389/192.168.39.40
Start Time:       Sat, 01 Nov 2025 10:12:08 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
IP:  10.244.0.13
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkznd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-pkznd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-950389
Warning  Failed     4m38s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     2m46s (x2 over 4m38s)  kubelet            Error: ErrImagePull
Warning  Failed     2m46s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    2m35s (x2 over 4m37s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     2m35s (x2 over 4m37s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    2m22s (x3 over 5m55s)  kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-950389 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-950389 logs sp-pod -n default: exit status 1 (70.091022ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-950389 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
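The storage path itself appears healthy: the retry above only waits for myclaim to leave "Pending", and the sp-pod events show no volume or mount errors, just docker.io/nginx pull failures. A short sketch for confirming that split by hand, assuming the same functional-950389 context (the jsonpath and field-selector values are illustrative):

	# confirm the claim reached Bound, i.e. the storage-provisioner path worked
	kubectl --context functional-950389 get pvc myclaim -o jsonpath='{.status.phase}'
	# list only the events for sp-pod; with a bound claim, the remaining failures should all be image pulls
	kubectl --context functional-950389 get events --field-selector involvedObject.name=sp-pod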
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-950389 -n functional-950389
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 logs -n 25: (1.638493838s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image     │ functional-950389 image save --daemon kicbase/echo-server:functional-950389 --alsologtostderr                                     │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │ 01 Nov 25 10:12 UTC │
	│ start     │ -p functional-950389 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio                           │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │                     │
	│ start     │ -p functional-950389 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                     │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │                     │
	│ ssh       │ functional-950389 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │                     │
	│ mount     │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdany-port2185038879/001:/mount-9p --alsologtostderr -v=1                   │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │                     │
	│ ssh       │ functional-950389 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │ 01 Nov 25 10:12 UTC │
	│ ssh       │ functional-950389 ssh -- ls -la /mount-9p                                                                                         │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │ 01 Nov 25 10:12 UTC │
	│ ssh       │ functional-950389 ssh cat /mount-9p/test-1761991943345565187                                                                      │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │ 01 Nov 25 10:12 UTC │
	│ ssh       │ functional-950389 ssh stat /mount-9p/created-by-test                                                                              │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh       │ functional-950389 ssh stat /mount-9p/created-by-pod                                                                               │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh       │ functional-950389 ssh sudo umount -f /mount-9p                                                                                    │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ mount     │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdspecific-port3461040035/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh       │ functional-950389 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh       │ functional-950389 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh       │ functional-950389 ssh -- ls -la /mount-9p                                                                                         │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh       │ functional-950389 ssh sudo umount -f /mount-9p                                                                                    │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ mount     │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount1 --alsologtostderr -v=1                 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ mount     │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount2 --alsologtostderr -v=1                 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ mount     │ -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount3 --alsologtostderr -v=1                 │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh       │ functional-950389 ssh findmnt -T /mount1                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ ssh       │ functional-950389 ssh findmnt -T /mount1                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh       │ functional-950389 ssh findmnt -T /mount2                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ ssh       │ functional-950389 ssh findmnt -T /mount3                                                                                          │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │ 01 Nov 25 10:14 UTC │
	│ mount     │ -p functional-950389 --kill=true                                                                                                  │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-950389 --alsologtostderr -v=1                                                                    │ functional-950389 │ jenkins │ v1.37.0 │ 01 Nov 25 10:14 UTC │                     │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:12:23
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:12:23.289635   82005 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:12:23.289852   82005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:12:23.289864   82005 out.go:374] Setting ErrFile to fd 2...
	I1101 10:12:23.289868   82005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:12:23.290052   82005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 10:12:23.290448   82005 out.go:368] Setting JSON to false
	I1101 10:12:23.291259   82005 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6891,"bootTime":1761985052,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:12:23.291312   82005 start.go:143] virtualization: kvm guest
	I1101 10:12:23.293186   82005 out.go:179] * [functional-950389] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:12:23.294551   82005 notify.go:221] Checking for updates...
	I1101 10:12:23.294571   82005 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:12:23.296131   82005 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:12:23.297696   82005 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 10:12:23.298969   82005 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 10:12:23.300775   82005 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:12:23.302184   82005 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:12:23.303819   82005 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:12:23.304225   82005 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:12:23.335203   82005 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 10:12:23.336436   82005 start.go:309] selected driver: kvm2
	I1101 10:12:23.336450   82005 start.go:930] validating driver "kvm2" against &{Name:functional-950389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-950389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
ntString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:12:23.336585   82005 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:12:23.337568   82005 cni.go:84] Creating CNI manager for ""
	I1101 10:12:23.337638   82005 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:12:23.337691   82005 start.go:353] cluster config:
	{Name:functional-950389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-950389 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:12:23.339173   82005 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.718118220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7272c81-d70d-4336-a844-e58192e3a92f name=/runtime.v1.RuntimeService/Version
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.719851828Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f5f2ecf-08dc-42de-996c-a720e5be5a17 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.720507560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992289720485574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:194849,},InodesUsed:&UInt64Value{Value:92,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f5f2ecf-08dc-42de-996c-a720e5be5a17 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.720985919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a754a67-8038-457f-9dab-30d5dd67d4a7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.721116323Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a754a67-8038-457f-9dab-30d5dd67d4a7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.721820566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a754a67-8038-457f-9dab-30d5dd67d4a7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.760199307Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=16140f73-0f1a-407b-9ecb-c6c8ab1c53a3 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.760272921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=16140f73-0f1a-407b-9ecb-c6c8ab1c53a3 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.761750341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=faf04557-47d4-446e-a5ed-bbbf899b05a4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.762675334Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992289762649030,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:194849,},InodesUsed:&UInt64Value{Value:92,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=faf04557-47d4-446e-a5ed-bbbf899b05a4 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.763315255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=249389e2-b1b6-4948-801d-176bba82032c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.763385744Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=249389e2-b1b6-4948-801d-176bba82032c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.763771713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=249389e2-b1b6-4948-801d-176bba82032c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.795026705Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=80cc4ce3-6bca-4a9b-aa86-fc5348a6720f name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.795779120Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2206d0a0ca9a11a75117230cff6b51f399d76981fd11314cdd1348d0c31a35c4,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-77bf4d6c4c-ljjhn,Uid:4eed9960-2472-42b7-bcd3-4ef596df7b49,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761992052668615633,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-77bf4d6c4c-ljjhn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 4eed9960-2472-42b7-bcd3-4ef596df7b49,k8s-app: dashboard-metrics-scraper,pod-template-hash: 77bf4d6c4c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:14:12.342953312Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:9f9e977cc4c5f75e55fcba1ec55d08a8a992f8fac7fd21a99f6
a7a820448b973,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-855c9754f9-wm424,Uid:6078d1e3-19c1-4501-9499-752f92e11376,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761992052608000501,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-wm424,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 6078d1e3-19c1-4501-9499-752f92e11376,k8s-app: kubernetes-dashboard,pod-template-hash: 855c9754f9,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:14:12.283497153Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:60978975-8366-41aa-b97a-93a1c86afe6c,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991944824425359,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,
io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:12:24.500677330Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f3e0030d38b65031d2143668ce214339e12fae726517239a313a4e6dc86ea1bc,Metadata:&PodSandboxMetadata{Name:hello-node-75c85bcc94-wws2s,Uid:d32fd7e0-500b-4734-88ed-9a2fdbad7f04,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761991936293594494,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-75c85bcc94-wws2s,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d32fd7e0-500b-4734-88ed-9a2fdbad7f04,pod-template-hash: 75c85bcc94,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:12:15.971381795Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:97d84c97619508a942b50015d715d4c91d3a72452e3b25ac696ed985311b40ba,Metada
ta:&PodSandboxMetadata{Name:sp-pod,Uid:9629694c-f849-48d0-8099-8989879acb4b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761991932734540590,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: sp-pod,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9629694c-f849-48d0-8099-8989879acb4b,test: storage-provisioner,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"test\":\"storage-provisioner\"},\"name\":\"sp-pod\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"image\":\"docker.io/nginx\",\"name\":\"myfrontend\",\"volumeMounts\":[{\"mountPath\":\"/tmp/mount\",\"name\":\"mypd\"}]}],\"volumes\":[{\"name\":\"mypd\",\"persistentVolumeClaim\":{\"claimName\":\"myclaim\"}}]}}\n,kubernetes.io/config.seen: 2025-11-01T10:12:08.612711445Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b6dbf761f3c666c59882dd0a3b39b7cf0a2fa8
e99c3659fbfe9f86f997d537b5,Metadata:&PodSandboxMetadata{Name:hello-node-connect-7d85dfc575-t7gtf,Uid:e6aa5eba-2bc1-4f18-9d27-1e0bc284884d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761991922437978400,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-7d85dfc575-t7gtf,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e6aa5eba-2bc1-4f18-9d27-1e0bc284884d,pod-template-hash: 7d85dfc575,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:12:02.051312145Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&PodSandboxMetadata{Name:mysql-5bb876957f-nnckx,Uid:dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761991920248185293,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.
namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,pod-template-hash: 5bb876957f,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:11:59.918429191Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-6ps9z,Uid:a502e626-8a66-4687-9b76-053029dabdd6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761991895141922043,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:11:34.660051334Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&PodSandboxMetadata
{Name:kube-proxy-jtt6l,Uid:a0c48c32-fe99-40bf-b651-b04105adec6b,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991895012845129,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:11:34.660101452Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:26487b7cf6878efb0c7f1e22a323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:437a47af-7662-481d-b1b7-09379f4069c9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991895008437551,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T10:11:34.660106144Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6
eff162,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-5lfgh,Uid:7c9758ea-cd15-49e2-893c-e78ed7d30f55,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761991895007610768,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:11:34.660107415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4761b5b9cfad0377a16669089b1cecc4946201e1cea0b53be0485d7c076615b9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-950389,Uid:832ac4e926fa9d3ad2ccc452d513f863,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991891381843249,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 832ac4e926fa9d3ad2ccc452d513f863,kubernetes.io/config.seen: 2025-11-01T10:11:30.664667748Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-950389,Uid:5ef73b8d782106f4ce68a921abfa7e79,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991891360194547,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5ef73b8d782106f4ce68a921abfa7e79,kubernetes.io/config.seen: 2025-11-01T10:11:30.664666909Z,kubernetes.io/config.s
ource: file,},RuntimeHandler:,},&PodSandbox{Id:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&PodSandboxMetadata{Name:etcd-functional-950389,Uid:cfd96429d7f7575fe65285b5903ca594,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1761991891359655352,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.40:2379,kubernetes.io/config.hash: cfd96429d7f7575fe65285b5903ca594,kubernetes.io/config.seen: 2025-11-01T10:11:30.664662286Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-950389,Uid:9b43be368500d83ae97f6110abbf40e1,Namespace:kube-system,Attempt:
0,},State:SANDBOX_READY,CreatedAt:1761991891357669917,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.40:8441,kubernetes.io/config.hash: 9b43be368500d83ae97f6110abbf40e1,kubernetes.io/config.seen: 2025-11-01T10:11:30.664665782Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:437a47af-7662-481d-b1b7-09379f4069c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991853770631626,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.nam
e: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T10:10:53.450871709Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d9b89241d322c09b20ad49ff28f27f
36d53287ba1b6d7a58950a03a0850382b5,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-5lfgh,Uid:7c9758ea-cd15-49e2-893c-e78ed7d30f55,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991852586184309,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:10:52.194020887Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-6ps9z,Uid:a502e626-8a66-4687-9b76-053029dabdd6,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991852476793471,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:10:52.143661391Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&PodSandboxMetadata{Name:kube-proxy-jtt6l,Uid:a0c48c32-fe99-40bf-b651-b04105adec6b,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991852171101167,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:10:51.834586044Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b
4b50f66,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-950389,Uid:5ef73b8d782106f4ce68a921abfa7e79,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991840534595372,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5ef73b8d782106f4ce68a921abfa7e79,kubernetes.io/config.seen: 2025-11-01T10:10:40.044391953Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&PodSandboxMetadata{Name:etcd-functional-950389,Uid:cfd96429d7f7575fe65285b5903ca594,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991840517701569,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD
,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.40:2379,kubernetes.io/config.hash: cfd96429d7f7575fe65285b5903ca594,kubernetes.io/config.seen: 2025-11-01T10:10:40.044389457Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-950389,Uid:832ac4e926fa9d3ad2ccc452d513f863,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761991840482777196,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,tier: control-plane,},Annotations:map[string]string{kubernetes.io/co
nfig.hash: 832ac4e926fa9d3ad2ccc452d513f863,kubernetes.io/config.seen: 2025-11-01T10:10:40.044386409Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=80cc4ce3-6bca-4a9b-aa86-fc5348a6720f name=/runtime.v1.RuntimeService/ListPodSandbox
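The ListPodSandboxResponse above enumerates every pod sandbox CRI-O tracks (SANDBOX_READY for the current pods, SANDBOX_NOTREADY for older, replaced attempts of the same pods). A comparable manual check, again assuming crictl and the default CRI-O socket, would be approximately:

	# list pod sandboxes; pairs with the container view from 'crictl ps -a'
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods
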
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.797275922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da988756-1282-4603-83a1-45c84e3ff654 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.797351873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da988756-1282-4603-83a1-45c84e3ff654 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.797688848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da988756-1282-4603-83a1-45c84e3ff654 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.813791798Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d874b57-755b-4e8a-b243-09a3f335bd24 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.813878442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d874b57-755b-4e8a-b243-09a3f335bd24 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.815265841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6ee78647-29df-4563-8099-f1f106e1ce92 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.817792718Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992289817728570,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:194849,},InodesUsed:&UInt64Value{Value:92,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6ee78647-29df-4563-8099-f1f106e1ce92 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.818700023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78ec279d-10d1-4ed4-8985-f5e25733de23 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.818775491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78ec279d-10d1-4ed4-8985-f5e25733de23 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:18:09 functional-950389 crio[14025]: time="2025-11-01 10:18:09.819363721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8,PodSandboxId:cbcb2c689c2b5b1ca9aee13165b781ce089583dd6c1a35065fd259801c182c5a,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761992045288650075,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 60978975-8366-41aa-b97a-93a1c86afe6c,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5adf9564ca6356300d89661e1ffa64e5520291c9b1c46f0c7a2926733d15d16,PodSandboxId:6b974e48e60d25ac75bba8ab0213c66b917b8dbd93434957b6253c0360c22ff1,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761991932929384265,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-nnckx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b,},Annotations:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"con
tainerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be,PodSandboxId:c72bccebda6e887772cc3ebb1c270fdd53869617cc542c8a94cc7aea17813bbf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895756692686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash:
e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817,PodSandboxId:1756d604cc85acc0c421877fbc05570e2e81d803392c957e624b52e6d6eff162,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5b
af0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761991895631754402,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b,PodSa
ndboxId:f63e3783b00886a154bf100a9ebb0b076d19678c123b9ffbc7dfc8f57e8c3606,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761991895284172691,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410,PodSandboxId:26487b7cf6878efb0c7f1e22a
323df3aa0e558454c7dda1e253950db7cf123de,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761991895219809116,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15,PodSandboxId:4761b5b9cfad0377a16669089b1cecc494620
1e1cea0b53be0485d7c076615b9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761991891634539228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:58eb2
8533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc,PodSandboxId:88cad2727994a82f17a4c01e02294b6cf9ec60e5044d128765b3d40d437bb81c,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761991891641031515,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePol
icy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154,PodSandboxId:8a83bc2cd1d4b677af16c3abc1eaca1220ee5ff9436474a39929f997474b3120,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761991891584043691,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9b43be368500d83ae97f6110abbf40e1,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77,PodSandboxId:139a554d695f05bb58bfd0e2ed05c5140d33cda05ed8689377af883672a72ae2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:8,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761991891565324561,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [
{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62,PodSandboxId:68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761991853869284413,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 437a47af-7662-481d-b1b7-09379f
4069c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92,PodSandboxId:d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853344196655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-5lfgh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9758ea-cd15-49e2-893c-e78ed7d30f55,},Annotations:map[string]
string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346,PodSandboxId:6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0
d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761991853156888745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-6ps9z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a502e626-8a66-4687-9b76-053029dabdd6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62537ebd8ec8cfb85842acc4c972c6f4c2e
963731421d69f8dd4ef7d38a28f75,PodSandboxId:17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761991852423478827,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-jtt6l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0c48c32-fe99-40bf-b651-b04105adec6b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177,PodS
andboxId:2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761991840821940954,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfd96429d7f7575fe65285b5903ca594,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Conta
iner{Id:35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53,PodSandboxId:1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:7,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761991840759609644,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ef73b8d782106f4ce68a921abfa7e79,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c,PodSandboxId:a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761991840663345376,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-950389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 832ac4e926fa9d3ad2ccc452d513f863,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":102
59,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78ec279d-10d1-4ed4-8985-f5e25733de23 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c2e6bc4021502       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago       Exited              mount-munger              0                   cbcb2c689c2b5       busybox-mount
	a5adf9564ca63       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb       5 minutes ago       Running             mysql                     0                   6b974e48e60d2       mysql-5bb876957f-nnckx
	774e7ac99a154       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   1                   c72bccebda6e8       coredns-66bc5c9577-6ps9z
	9c5bd32acfec7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      6 minutes ago       Running             coredns                   1                   1756d604cc85a       coredns-66bc5c9577-5lfgh
	0bea04e4d0bfb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      6 minutes ago       Running             kube-proxy                1                   f63e3783b0088       kube-proxy-jtt6l
	273ca0fdb8a91       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       1                   26487b7cf6878       storage-provisioner
	58eb28533db82       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      6 minutes ago       Running             etcd                      4                   88cad2727994a       etcd-functional-950389
	186f819959a13       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      6 minutes ago       Running             kube-scheduler            4                   4761b5b9cfad0       kube-scheduler-functional-950389
	b5396a2c9c588       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      6 minutes ago       Running             kube-apiserver            0                   8a83bc2cd1d4b       kube-apiserver-functional-950389
	e90b96d4e9728       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      6 minutes ago       Running             kube-controller-manager   8                   139a554d695f0       kube-controller-manager-functional-950389
	a2f46ea37f76e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       0                   68f5ead72a631       storage-provisioner
	8d93163c73425       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   0                   d9b89241d322c       coredns-66bc5c9577-5lfgh
	9f54db03325aa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      7 minutes ago       Exited              coredns                   0                   6554bb887a4e4       coredns-66bc5c9577-6ps9z
	62537ebd8ec8c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      7 minutes ago       Exited              kube-proxy                0                   17b51286c59a8       kube-proxy-jtt6l
	f49d0ac915f87       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      7 minutes ago       Exited              etcd                      3                   2c85f617f556b       etcd-functional-950389
	35509da8a528e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      7 minutes ago       Exited              kube-controller-manager   7                   1e98d8cc22231       kube-controller-manager-functional-950389
	39af2ac349d69       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      7 minutes ago       Exited              kube-scheduler            3                   a312f42dd95c1       kube-scheduler-functional-950389
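	
	The table summarizes the same ListContainers data: the control-plane containers have been restarted repeatedly (etcd attempt 4, kube-scheduler attempt 4, kube-controller-manager attempt 8) and the earlier attempts remain in the Exited state. A hypothetical follow-up, assuming the profile is still up, is to pull the logs of one of those exited attempts by the (prefix) container ID shown above:
	
	  minikube ssh -p functional-950389 -- sudo crictl logs 35509da8a528e   # exited kube-controller-manager, attempt 7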
	
	
	==> coredns [774e7ac99a1544464667b47cbd5816a16cc6563f1fe95df87265068d45b7b5be] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> coredns [8d93163c734255a1ab1e2fb85b8e779b5d73e343d2f4346e8d26789c07b26e92] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9c5bd32acfec70d052cccdfa7390d7b4719458133a155dea0720eaba86242817] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	
	
	==> coredns [9f54db03325aa1e112ef490697e276720ce6a9670cd42150d84459926faee346] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
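	
	All four CoreDNS instances report the same configuration SHA512, so the exited attempt-0 containers and the running attempt-1 containers loaded an identical Corefile; the exited pair simply received SIGTERM and went through the 5s lameduck period during the restart. To inspect the Corefile behind that hash one could, assuming the standard kubeadm/minikube ConfigMap layout, run:
	
	  kubectl --context functional-950389 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'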
	
	
	==> describe nodes <==
	Name:               functional-950389
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-950389
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=functional-950389
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_10_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:10:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-950389
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:18:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:14:38 +0000   Sat, 01 Nov 2025 10:10:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:14:38 +0000   Sat, 01 Nov 2025 10:10:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:14:38 +0000   Sat, 01 Nov 2025 10:10:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:14:38 +0000   Sat, 01 Nov 2025 10:10:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    functional-950389
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 50219d7eb2434b54bfeb5a3ddfefd678
	  System UUID:                50219d7e-b243-4b54-bfeb-5a3ddfefd678
	  Boot ID:                    f9fafb52-9d25-4c51-b234-2193020a6a0b
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-wws2s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  default                     hello-node-connect-7d85dfc575-t7gtf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     mysql-5bb876957f-nnckx                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m11s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-5lfgh                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m18s
	  kube-system                 coredns-66bc5c9577-6ps9z                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m18s
	  kube-system                 etcd-functional-950389                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m23s
	  kube-system                 kube-apiserver-functional-950389              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m34s
	  kube-system                 kube-controller-manager-functional-950389     200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-proxy-jtt6l                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-scheduler-functional-950389              100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-ljjhn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wm424         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (26%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m17s                  kube-proxy       
	  Normal  Starting                 6m33s                  kube-proxy       
	  Normal  Starting                 7m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m30s (x2 over 7m30s)  kubelet          Node functional-950389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m30s (x2 over 7m30s)  kubelet          Node functional-950389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m30s (x2 over 7m30s)  kubelet          Node functional-950389 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m23s                  kubelet          Node functional-950389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m23s                  kubelet          Node functional-950389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m23s                  kubelet          Node functional-950389 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m20s                  node-controller  Node functional-950389 event: Registered Node functional-950389 in Controller
	  Normal  Starting                 6m40s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m39s (x8 over 6m40s)  kubelet          Node functional-950389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s (x8 over 6m40s)  kubelet          Node functional-950389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s (x7 over 6m40s)  kubelet          Node functional-950389 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m33s                  node-controller  Node functional-950389 event: Registered Node functional-950389 in Controller
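	
	The node itself reports healthy: Ready has been True since 10:10:44, none of the pressure conditions are set, and CPU requests (1450m of 2000m) are the only allocation above 70%. A quick spot check of just the node conditions, using the same context, could look like:
	
	  kubectl --context functional-950389 get node functional-950389 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'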
	
	
	==> dmesg <==
	[Nov 1 10:04] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000074] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.833497] kauditd_printk_skb: 249 callbacks suppressed
	[ +20.643469] kauditd_printk_skb: 38 callbacks suppressed
	[ +13.048600] kauditd_printk_skb: 11 callbacks suppressed
	[Nov 1 10:06] kauditd_printk_skb: 263 callbacks suppressed
	[ +13.559184] kauditd_printk_skb: 154 callbacks suppressed
	[Nov 1 10:07] kauditd_printk_skb: 5 callbacks suppressed
	[ +11.574466] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 10:08] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 10:10] kauditd_printk_skb: 5 callbacks suppressed
	[  +0.100652] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.156569] kauditd_printk_skb: 132 callbacks suppressed
	[  +0.226013] kauditd_printk_skb: 12 callbacks suppressed
	[Nov 1 10:11] kauditd_printk_skb: 170 callbacks suppressed
	[  +0.112112] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.976961] kauditd_printk_skb: 232 callbacks suppressed
	[  +4.355116] kauditd_printk_skb: 154 callbacks suppressed
	[ +18.317316] kauditd_printk_skb: 167 callbacks suppressed
	[Nov 1 10:12] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000026] kauditd_printk_skb: 80 callbacks suppressed
	[  +0.000138] kauditd_printk_skb: 95 callbacks suppressed
	[  +6.081767] kauditd_printk_skb: 26 callbacks suppressed
	[Nov 1 10:14] kauditd_printk_skb: 29 callbacks suppressed
	[Nov 1 10:15] kauditd_printk_skb: 68 callbacks suppressed
	
	
	==> etcd [58eb28533db8250617e51db9afe575300a277c682560c5c14a995c9b177e1dfc] <==
	{"level":"warn","ts":"2025-11-01T10:11:33.817095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:11:33.863396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46438","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T10:12:08.583551Z","caller":"traceutil/trace.go:172","msg":"trace[1153351526] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"360.077961ms","start":"2025-11-01T10:12:08.223461Z","end":"2025-11-01T10:12:08.583539Z","steps":["trace[1153351526] 'process raft request'  (duration: 360.001164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:08.584011Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:08.223440Z","time spent":"360.166919ms","remote":"127.0.0.1:45660","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1934,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/default/sp-pod\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/default/sp-pod\" value_size:1897 >> failure:<>"}
	{"level":"info","ts":"2025-11-01T10:12:11.557197Z","caller":"traceutil/trace.go:172","msg":"trace[723015122] linearizableReadLoop","detail":"{readStateIndex:656; appliedIndex:656; }","duration":"429.403939ms","start":"2025-11-01T10:12:11.127776Z","end":"2025-11-01T10:12:11.557180Z","steps":["trace[723015122] 'read index received'  (duration: 429.399712ms)","trace[723015122] 'applied index is now lower than readState.Index'  (duration: 3.459µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:12:11.557472Z","caller":"traceutil/trace.go:172","msg":"trace[657617144] transaction","detail":"{read_only:false; response_revision:611; number_of_response:1; }","duration":"464.683385ms","start":"2025-11-01T10:12:11.092781Z","end":"2025-11-01T10:12:11.557464Z","steps":["trace[657617144] 'process raft request'  (duration: 464.558391ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:11.558571Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:11.092765Z","time spent":"465.635981ms","remote":"127.0.0.1:45614","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:610 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-11-01T10:12:11.557687Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"429.873539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:11.560319Z","caller":"traceutil/trace.go:172","msg":"trace[866693380] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:611; }","duration":"432.534968ms","start":"2025-11-01T10:12:11.127773Z","end":"2025-11-01T10:12:11.560308Z","steps":["trace[866693380] 'agreement among raft nodes before linearized reading'  (duration: 429.857759ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:11.560351Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:11.127757Z","time spent":"432.58258ms","remote":"127.0.0.1:45660","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-11-01T10:12:14.164943Z","caller":"traceutil/trace.go:172","msg":"trace[762212931] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"217.662746ms","start":"2025-11-01T10:12:13.947270Z","end":"2025-11-01T10:12:14.164932Z","steps":["trace[762212931] 'process raft request'  (duration: 217.469939ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:12:20.244562Z","caller":"traceutil/trace.go:172","msg":"trace[666249141] linearizableReadLoop","detail":"{readStateIndex:693; appliedIndex:693; }","duration":"427.871999ms","start":"2025-11-01T10:12:19.816674Z","end":"2025-11-01T10:12:20.244546Z","steps":["trace[666249141] 'read index received'  (duration: 427.867439ms)","trace[666249141] 'applied index is now lower than readState.Index'  (duration: 3.794µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:12:20.244658Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"427.993311ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.244674Z","caller":"traceutil/trace.go:172","msg":"trace[1789657826] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:645; }","duration":"428.022762ms","start":"2025-11-01T10:12:19.816647Z","end":"2025-11-01T10:12:20.244669Z","steps":["trace[1789657826] 'agreement among raft nodes before linearized reading'  (duration: 427.978929ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:12:20.244757Z","caller":"traceutil/trace.go:172","msg":"trace[1443099741] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"622.159008ms","start":"2025-11-01T10:12:19.622588Z","end":"2025-11-01T10:12:20.244747Z","steps":["trace[1443099741] 'process raft request'  (duration: 622.037625ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.244849Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"406.619182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.245561Z","caller":"traceutil/trace.go:172","msg":"trace[1986868595] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:646; }","duration":"407.332043ms","start":"2025-11-01T10:12:19.838220Z","end":"2025-11-01T10:12:20.245553Z","steps":["trace[1986868595] 'agreement among raft nodes before linearized reading'  (duration: 406.601192ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.245621Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:19.838199Z","time spent":"407.411542ms","remote":"127.0.0.1:45660","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T10:12:20.244878Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.365768ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.245875Z","caller":"traceutil/trace.go:172","msg":"trace[2012618942] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:646; }","duration":"118.354669ms","start":"2025-11-01T10:12:20.127509Z","end":"2025-11-01T10:12:20.245864Z","steps":["trace[2012618942] 'agreement among raft nodes before linearized reading'  (duration: 117.357716ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.244894Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"201.996816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.246312Z","caller":"traceutil/trace.go:172","msg":"trace[524363206] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:646; }","duration":"203.322528ms","start":"2025-11-01T10:12:20.042894Z","end":"2025-11-01T10:12:20.246217Z","steps":["trace[524363206] 'agreement among raft nodes before linearized reading'  (duration: 201.992234ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.245252Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"279.558918ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:12:20.246654Z","caller":"traceutil/trace.go:172","msg":"trace[1402734132] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:646; }","duration":"280.961849ms","start":"2025-11-01T10:12:19.965685Z","end":"2025-11-01T10:12:20.246647Z","steps":["trace[1402734132] 'agreement among raft nodes before linearized reading'  (duration: 279.545002ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T10:12:20.246164Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:12:19.622571Z","time spent":"622.75989ms","remote":"127.0.0.1:45614","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:645 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> etcd [f49d0ac915f874845d2f0bf26effc4af26f80c8bf498ef08d863d1ac072d8177] <==
	{"level":"warn","ts":"2025-11-01T10:10:43.150416Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.155518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.174317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.176931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.185561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:10:43.248134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57298","server-name":"","error":"EOF"}
	2025/11/01 10:10:46 WARNING: [core] [Server #3]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2025-11-01T10:11:12.108518Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:11:12.108993Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-950389","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.40:2380"],"advertise-client-urls":["https://192.168.39.40:2379"]}
	{"level":"error","ts":"2025-11-01T10:11:12.109280Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:11:12.202428Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:11:12.202509Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:11:12.202539Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"1088a855a4aa8d0a","current-leader-member-id":"1088a855a4aa8d0a"}
	{"level":"info","ts":"2025-11-01T10:11:12.202652Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T10:11:12.202687Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:11:12.202972Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:11:12.203180Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:11:12.203216Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T10:11:12.203272Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.40:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:11:12.203301Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.40:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:11:12.203309Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.40:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:11:12.206201Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.40:2380"}
	{"level":"error","ts":"2025-11-01T10:11:12.206290Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.40:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:11:12.206334Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.40:2380"}
	{"level":"info","ts":"2025-11-01T10:11:12.206372Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-950389","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.40:2380"],"advertise-client-urls":["https://192.168.39.40:2379"]}
	
	
	==> kernel <==
	 10:18:10 up 14 min,  0 users,  load average: 0.63, 0.49, 0.34
	Linux functional-950389 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [b5396a2c9c58870683b192a3fdf384b920eb701896ab62edc1c65efb081a3154] <==
	I1101 10:11:34.675834       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:11:34.679471       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:11:34.674692       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:11:34.693006       1 policy_source.go:240] refreshing policies
	I1101 10:11:34.693738       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:11:34.716576       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:11:34.744709       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:11:35.480463       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:11:36.463365       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:11:36.515464       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:11:36.570550       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:11:36.583288       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:11:37.941140       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:11:38.189636       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:11:55.076042       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.65.198"}
	I1101 10:11:59.787637       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.107.182.210"}
	I1101 10:11:59.833818       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 10:12:02.120377       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.78.168"}
	I1101 10:12:16.031264       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.83.65"}
	E1101 10:12:20.371618       1 conn.go:339] Error on socket receive: read tcp 192.168.39.40:8441->192.168.39.1:47384: use of closed network connection
	E1101 10:12:21.465827       1 conn.go:339] Error on socket receive: read tcp 192.168.39.40:8441->192.168.39.1:47396: use of closed network connection
	E1101 10:12:23.108237       1 conn.go:339] Error on socket receive: read tcp 192.168.39.40:8441->192.168.39.1:47410: use of closed network connection
	I1101 10:14:12.065414       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 10:14:12.302789       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.234.201"}
	I1101 10:14:12.371558       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.63.107"}
	
	
	==> kube-controller-manager [35509da8a528e42be0f838e8fafb95aecf866da7c5e6b7a3463389d69257be53] <==
	I1101 10:10:50.913409       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:10:50.913448       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:10:50.913471       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:10:50.913519       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:10:50.913525       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:10:50.913881       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:10:50.926331       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:10:50.929618       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-950389" podCIDRs=["10.244.0.0/24"]
	I1101 10:10:50.941302       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:10:50.942033       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:10:50.944162       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:10:50.952149       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 10:10:50.952184       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:10:50.953662       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:10:50.953913       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:10:50.955705       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:10:50.955023       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:10:50.956044       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:10:50.955035       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:10:50.955053       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 10:10:50.956890       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:10:50.956977       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:10:50.957028       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-950389"
	I1101 10:10:50.957054       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:10:50.959752       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-controller-manager [e90b96d4e97286fd540c2d19181b1868c42d2dab4d68fa696bfc7108af3a7c77] <==
	I1101 10:11:37.986823       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 10:11:37.986886       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:11:37.986947       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 10:11:37.986998       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 10:11:37.989290       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:11:37.990575       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 10:11:37.993905       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 10:11:37.993951       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:11:37.994110       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:11:37.994161       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-950389"
	I1101 10:11:37.994218       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:11:37.998608       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:11:38.005600       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:11:38.006324       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:11:38.013038       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:11:38.016501       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	E1101 10:14:12.164598       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.179617       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.185434       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.191012       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.200187       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.204924       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.208759       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.223004       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 10:14:12.223880       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
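Note: the repeated "serviceaccount \"kubernetes-dashboard\" not found" errors above are an ordering artifact of enabling the dashboard addon: the ReplicaSets are synced before the ServiceAccount object exists, and the retries succeed once it does (both dashboard pods appear on the node in the summary near the top of this log). A quick way to confirm the account was eventually created, assuming the functional-950389 kube context that matches this profile; shown only as an illustrative check:

    kubectl --context functional-950389 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard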
	
	
	==> kube-proxy [0bea04e4d0bfb829facb73a10dc15c1004846f241b01c955bda0363f2503928b] <==
	I1101 10:11:36.117752       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:11:36.219368       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:11:36.219424       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.40"]
	E1101 10:11:36.219530       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:11:36.305697       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:11:36.305750       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:11:36.305787       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:11:36.338703       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:11:36.339488       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:11:36.339577       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:11:36.348605       1 config.go:200] "Starting service config controller"
	I1101 10:11:36.348887       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:11:36.350863       1 config.go:309] "Starting node config controller"
	I1101 10:11:36.353887       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:11:36.353899       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:11:36.352809       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:11:36.353905       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:11:36.352796       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:11:36.362327       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:11:36.449909       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:11:36.455219       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:11:36.465605       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [62537ebd8ec8cfb85842acc4c972c6f4c2e963731421d69f8dd4ef7d38a28f75] <==
	I1101 10:10:52.788695       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:10:52.892404       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:10:52.892608       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.40"]
	E1101 10:10:52.893113       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:10:53.042226       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:10:53.042394       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:10:53.042428       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:10:53.079282       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:10:53.082808       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:10:53.082825       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:10:53.089905       1 config.go:200] "Starting service config controller"
	I1101 10:10:53.089918       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:10:53.089930       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:10:53.089933       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:10:53.089940       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:10:53.089944       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:10:53.097801       1 config.go:309] "Starting node config controller"
	I1101 10:10:53.097813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:10:53.097819       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:10:53.190556       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:10:53.190589       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:10:53.190606       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [186f819959a1348c2af4e54006dff0c66c8117b0b9eb535946854568b1f2ca15] <==
	I1101 10:11:33.157776       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:11:34.552390       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:11:34.552487       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:11:34.552513       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:11:34.552535       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:11:34.599177       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:11:34.599216       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:11:34.604231       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:34.604272       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:34.606242       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:11:34.606469       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:11:34.706285       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [39af2ac349d69d2fd9b2dcd027c4c4e1bb63a93721f12f8f0dfc9945567c869c] <==
	E1101 10:10:44.508024       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 10:10:44.508189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 10:10:44.508313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 10:10:44.508437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 10:10:44.508575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 10:10:44.508764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 10:10:44.508894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 10:10:44.509295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 10:10:44.509651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 10:10:44.509662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 10:10:44.509740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 10:10:44.509795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 10:10:44.509871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 10:10:44.510191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 10:10:44.510224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 10:10:44.510349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 10:10:45.312670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 10:10:45.669724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 10:10:48.397806       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:12.107394       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:11:12.107447       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:11:12.114303       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:11:12.114823       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:11:12.118329       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:11:12.118374       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 10:17:10 functional-950389 kubelet[14397]: E1101 10:17:10.911789   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992230910727660  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:17:10 functional-950389 kubelet[14397]: E1101 10:17:10.911810   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992230910727660  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:17:20 functional-950389 kubelet[14397]: E1101 10:17:20.917712   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992240914971542  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:17:20 functional-950389 kubelet[14397]: E1101 10:17:20.917740   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992240914971542  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:17:30 functional-950389 kubelet[14397]: E1101 10:17:30.811580   14397 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod437a47af-7662-481d-b1b7-09379f4069c9/crio-68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446: Error finding container 68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446: Status 404 returned error can't find the container with id 68f5ead72a631362f93ee11bdee1d8769a1750005f9dc29ecaa9a9dda9915446
	Nov 01 10:17:30 functional-950389 kubelet[14397]: E1101 10:17:30.812354   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod5ef73b8d782106f4ce68a921abfa7e79/crio-1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66: Error finding container 1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66: Status 404 returned error can't find the container with id 1e98d8cc22231bc02a76b8b9c2987dbeb6c87d700ccceadb0052cc40b4b50f66
	Nov 01 10:17:30 functional-950389 kubelet[14397]: E1101 10:17:30.812659   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/poda502e626-8a66-4687-9b76-053029dabdd6/crio-6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118: Error finding container 6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118: Status 404 returned error can't find the container with id 6554bb887a4e46ae587ab3ce51abeef07f3943cca0a6a8c95ee7d822cf987118
	Nov 01 10:17:30 functional-950389 kubelet[14397]: E1101 10:17:30.813014   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod7c9758ea-cd15-49e2-893c-e78ed7d30f55/crio-d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5: Error finding container d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5: Status 404 returned error can't find the container with id d9b89241d322c09b20ad49ff28f27f36d53287ba1b6d7a58950a03a0850382b5
	Nov 01 10:17:30 functional-950389 kubelet[14397]: E1101 10:17:30.813629   14397 manager.go:1116] Failed to create existing container: /kubepods/besteffort/poda0c48c32-fe99-40bf-b651-b04105adec6b/crio-17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f: Error finding container 17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f: Status 404 returned error can't find the container with id 17b51286c59a8bc3eced4325f549d7a6976f82c95cf65ed363eff1520f95a85f
	Nov 01 10:17:30 functional-950389 kubelet[14397]: E1101 10:17:30.813895   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod832ac4e926fa9d3ad2ccc452d513f863/crio-a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7: Error finding container a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7: Status 404 returned error can't find the container with id a312f42dd95c1bbe80d4e02d4b83b3aebca6f86c6b1ee0e6490bd5c23dea3af7
	Nov 01 10:17:30 functional-950389 kubelet[14397]: E1101 10:17:30.814349   14397 manager.go:1116] Failed to create existing container: /kubepods/burstable/podcfd96429d7f7575fe65285b5903ca594/crio-2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9: Error finding container 2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9: Status 404 returned error can't find the container with id 2c85f617f556b93f76c665875b78b587e1dd47cee508740e24275ba1ea021bf9
	Nov 01 10:17:30 functional-950389 kubelet[14397]: E1101 10:17:30.920217   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992250919750479  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:17:30 functional-950389 kubelet[14397]: E1101 10:17:30.920245   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992250919750479  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:17:40 functional-950389 kubelet[14397]: E1101 10:17:40.923352   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992260922632245  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:17:40 functional-950389 kubelet[14397]: E1101 10:17:40.923402   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992260922632245  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:17:43 functional-950389 kubelet[14397]: E1101 10:17:43.125991   14397 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Nov 01 10:17:43 functional-950389 kubelet[14397]: E1101 10:17:43.126045   14397 kuberuntime_image.go:43] "Failed to pull image" err="fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="kicbase/echo-server:latest"
	Nov 01 10:17:43 functional-950389 kubelet[14397]: E1101 10:17:43.126428   14397 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-connect-7d85dfc575-t7gtf_default(e6aa5eba-2bc1-4f18-9d27-1e0bc284884d): ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 10:17:43 functional-950389 kubelet[14397]: E1101 10:17:43.126582   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t7gtf" podUID="e6aa5eba-2bc1-4f18-9d27-1e0bc284884d"
	Nov 01 10:17:50 functional-950389 kubelet[14397]: E1101 10:17:50.927758   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992270926515756  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:17:50 functional-950389 kubelet[14397]: E1101 10:17:50.927929   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992270926515756  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:17:57 functional-950389 kubelet[14397]: E1101 10:17:57.707804   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t7gtf" podUID="e6aa5eba-2bc1-4f18-9d27-1e0bc284884d"
	Nov 01 10:18:00 functional-950389 kubelet[14397]: E1101 10:18:00.931411   14397 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761992280929877208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:18:00 functional-950389 kubelet[14397]: E1101 10:18:00.931455   14397 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761992280929877208  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:194849}  inodes_used:{value:92}}"
	Nov 01 10:18:10 functional-950389 kubelet[14397]: E1101 10:18:10.708503   14397 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/hello-node-connect-7d85dfc575-t7gtf" podUID="e6aa5eba-2bc1-4f18-9d27-1e0bc284884d"
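Note: the ErrImagePull/ImagePullBackOff entries above are Docker Hub's anonymous pull rate limit ("toomanyrequests") rejecting the docker.io/kicbase/echo-server pull, which is what keeps hello-node-connect-7d85dfc575-t7gtf from starting. One way to take the anonymous pull out of the picture on a rerun is to attach Hub credentials to the namespace's default ServiceAccount; a minimal sketch, assuming the functional-950389 kube context for this profile and credentials of your own (<user> and <token> are placeholders):

    kubectl --context functional-950389 -n default create secret docker-registry hub-creds \
      --docker-server=https://index.docker.io/v1/ --docker-username=<user> --docker-password=<token>
    kubectl --context functional-950389 -n default patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"hub-creds"}]}'
    kubectl --context functional-950389 -n default delete pod hello-node-connect-7d85dfc575-t7gtf
    # the owning ReplicaSet recreates the pod; new pods in the namespace pull with hub-creds attached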
	
	
	==> storage-provisioner [273ca0fdb8a91fa3e741bb6943c95a6dfb8153f908ad4f5a083bd2881a67b410] <==
	W1101 10:17:46.076759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:48.080737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:48.085616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:50.089420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:50.096848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:52.100705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:52.108688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:54.112608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:54.118605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:56.122260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:56.131853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:58.135911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:17:58.145517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:00.149396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:00.155340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:02.158880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:02.164032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:04.168216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:04.172958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:06.176769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:06.182208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:08.185219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:08.194751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:10.198847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:18:10.205427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a2f46ea37f76ee460b92d3ba6f03808bc8727f53b3a42d9893fdf5c315adca62] <==
	W1101 10:10:53.971607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:10:53.971757       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 10:10:53.972439       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cc693950-c77e-4542-ade3-eb86356b8127", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-950389_edc0ea9e-5089-482e-a4f3-2ad82dd73b48 became leader
	I1101 10:10:53.972524       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-950389_edc0ea9e-5089-482e-a4f3-2ad82dd73b48!
	W1101 10:10:53.976036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:53.986630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 10:10:54.073762       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-950389_edc0ea9e-5089-482e-a4f3-2ad82dd73b48!
	W1101 10:10:55.990039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:55.995479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:57.999172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:10:58.005427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:00.008624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:00.014243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:02.018143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:02.022795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:04.030153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:04.037311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:06.041900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:06.047277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:08.051696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:08.062794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:10.069361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:10.086725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:12.091000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 10:11:12.096762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
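Note: the kubelet errors in the log above fall into two groups. The eviction-manager lines ("failed to get HasDedicatedImageFs: missing image stats") show the kubelet rejecting the image-filesystem stats it gets back from CRI-O, even though the response it echoes already carries a mountpoint, used_bytes and inodes_used; the pod_workers lines are the docker.io rate-limit pulls that recur in the pod describes below. The CRI side of the first group can be checked directly on the node; a minimal look, assuming the profile name from this run:

	out/minikube-linux-amd64 -p functional-950389 ssh -- sudo crictl imagefsinfo

`crictl imagefsinfo` prints the same mountpoint, usedBytes and inodesUsed values that appear verbatim in the error, which suggests the stats themselves are available and the problem is in how the kubelet classifies the filesystem.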
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-950389 -n functional-950389
helpers_test.go:269: (dbg) Run:  kubectl --context functional-950389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-950389 describe pod busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-950389 describe pod busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424: exit status 1 (93.198883ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:24 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.15
	IPs:
	  IP:  10.244.0.15
	Containers:
	  mount-munger:
	    Container ID:  cri-o://c2e6bc4021502c2f170d620789895f85434175172b836be9f091032da1f39fa8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 10:14:05 +0000
	      Finished:     Sat, 01 Nov 2025 10:14:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2nqsv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-2nqsv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m47s  default-scheduler  Successfully assigned default/busybox-mount to functional-950389
	  Normal  Pulling    5m46s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m6s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.385s (1m40.236s including waiting). Image size: 4631262 bytes.
	  Normal  Created    4m6s   kubelet            Created container: mount-munger
	  Normal  Started    4m6s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-wws2s
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:15 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.14
	IPs:
	  IP:           10.244.0.14
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqfqw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zqfqw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m56s                default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wws2s to functional-950389
	  Warning  Failed     74s (x2 over 4m10s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     74s (x2 over 4m10s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    63s (x2 over 4m9s)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     63s (x2 over 4m9s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    52s (x3 over 5m55s)  kubelet            Pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-t7gtf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:02 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b7w5p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-b7w5p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  6m9s                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-t7gtf to functional-950389
	  Warning  Failed     3m35s                 kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m10s (x3 over 6m9s)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     28s (x2 over 5m12s)   kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     28s (x3 over 5m12s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x4 over 5m12s)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x4 over 5m12s)    kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-950389/192.168.39.40
	Start Time:       Sat, 01 Nov 2025 10:12:08 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkznd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-pkznd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m3s                   default-scheduler  Successfully assigned default/sp-pod to functional-950389
	  Warning  Failed     4m41s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m49s (x2 over 4m41s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m49s                  kubelet            Failed to pull image "docker.io/nginx": fetching target platform image selected from image index: reading manifest sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58 in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m38s (x2 over 4m40s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m38s (x2 over 4m40s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    2m25s (x3 over 5m58s)  kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-ljjhn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-wm424" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-950389 describe pod busybox-mount hello-node-75c85bcc94-wws2s hello-node-connect-7d85dfc575-t7gtf sp-pod dashboard-metrics-scraper-77bf4d6c4c-ljjhn kubernetes-dashboard-855c9754f9-wm424: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (370.25s)
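Note: every pod still shown as Pending in the describe output above (hello-node, hello-node-connect, sp-pod) is stuck on a docker.io pull that hit the unauthenticated rate limit (toomanyrequests); busybox-mount completed, and sp-pod was scheduled with its volume attached, only the nginx image never arrived. One hedged workaround for a local reproduction, assuming a Docker daemon on the host that is already authenticated with `docker login` (not part of this test's flow, and with illustrative tags), is to pull the images on the host and side-load them into the profile:

	docker pull docker.io/nginx:alpine
	docker pull docker.io/kicbase/echo-server:1.0
	out/minikube-linux-amd64 -p functional-950389 image load docker.io/nginx:alpine
	out/minikube-linux-amd64 -p functional-950389 image load docker.io/kicbase/echo-server:1.0

Because an untagged image such as `docker.io/nginx` resolves to the `latest` tag and therefore defaults to imagePullPolicy: Always, a side-loaded copy is only used if the manifest pins a tag (as above) or sets imagePullPolicy: IfNotPresent.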

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-950389 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-950389 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-wws2s" [d32fd7e0-500b-4734-88ed-9a2fdbad7f04] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-950389 -n functional-950389
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-01 10:22:16.280109488 +0000 UTC m=+1951.022317470
functional_test.go:1460: (dbg) Run:  kubectl --context functional-950389 describe po hello-node-75c85bcc94-wws2s -n default
functional_test.go:1460: (dbg) kubectl --context functional-950389 describe po hello-node-75c85bcc94-wws2s -n default:
Name:             hello-node-75c85bcc94-wws2s
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-950389/192.168.39.40
Start Time:       Sat, 01 Nov 2025 10:12:15 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.14
IPs:
IP:           10.244.0.14
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zqfqw (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-zqfqw:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/hello-node-75c85bcc94-wws2s to functional-950389
Warning  Failed     5m19s (x2 over 8m15s)  kubelet            Failed to pull image "kicbase/echo-server": reading manifest latest in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     117s (x3 over 8m15s)   kubelet            Error: ErrImagePull
Warning  Failed     117s                   kubelet            Failed to pull image "kicbase/echo-server": fetching target platform image selected from manifest list: reading manifest sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 in docker.io/kicbase/echo-server: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    81s (x5 over 8m14s)    kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     81s (x5 over 8m14s)    kubelet            Error: ImagePullBackOff
Normal   Pulling    70s (x4 over 10m)      kubelet            Pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-950389 logs hello-node-75c85bcc94-wws2s -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-950389 logs hello-node-75c85bcc94-wws2s -n default: exit status 1 (65.754364ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-wws2s" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-950389 logs hello-node-75c85bcc94-wws2s -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.53s)
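Note: the deployment is created from the untagged `kicbase/echo-server` image, so every pull goes to docker.io and hits the same toomanyrequests limit seen above; the pod never leaves ImagePullBackOff and the 10m wait times out. When reproducing locally, the failure can be surfaced in seconds rather than minutes with a bounded rollout check and an event filter (illustrative commands, not part of the test itself):

	kubectl --context functional-950389 rollout status deployment/hello-node --timeout=120s
	kubectl --context functional-950389 get events \
	  --field-selector involvedObject.name=hello-node-75c85bcc94-wws2s,reason=Failed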

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 service --namespace=default --https --url hello-node: exit status 115 (250.58891ms)

                                                
                                                
-- stdout --
	https://192.168.39.40:32213
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-950389 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 service hello-node --url --format={{.IP}}: exit status 115 (252.289937ms)

                                                
                                                
-- stdout --
	192.168.39.40
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-950389 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 service hello-node --url: exit status 115 (253.354599ms)

                                                
                                                
-- stdout --
	http://192.168.39.40:32213
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-950389 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.40:32213
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.25s)
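Note: all three ServiceCmd failures (HTTPS, Format, URL) exit with SVC_UNREACHABLE for the same reason: the NodePort URL is resolved and printed, but minikube reports the service as unavailable because no running pod backs hello-node, which is the ImagePullBackOff condition above. A quick way to confirm whether a service has live backends before calling `minikube service` (illustrative, assuming the same context):

	kubectl --context functional-950389 get pods -l app=hello-node -o wide
	kubectl --context functional-950389 get endpointslices -l kubernetes.io/service-name=hello-node

An EndpointSlice with no ready endpoints (or none at all) will produce the same SVC_UNREACHABLE result, regardless of whether the NodePort itself is open.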

                                                
                                    
TestPreload (158.07s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-401855 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-401855 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m34.035652579s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-401855 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-401855 image pull gcr.io/k8s-minikube/busybox: (3.75299238s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-401855
E1101 11:01:59.847003   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-401855: (7.008127727s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-401855 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E1101 11:02:29.153756   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-401855 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (50.458061136s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-401855 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-11-01 11:02:54.745974822 +0000 UTC m=+4389.488182799
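Note: the profile was created with --preload=false, so the busybox image pulled via `image pull` lived only in CRI-O's on-disk store; after the stop/start the image list contains only the preloaded default images, i.e. the pulled image did not survive the restart. The "Last Start" log below shows the v1.32.0 preload tarball being located and downloaded again during the second start, which is consistent with the image store being repopulated from the preload. A manual check of what CRI-O actually holds after such a restart (illustrative, not part of the test):

	out/minikube-linux-amd64 -p test-preload-401855 ssh -- sudo crictl images | grep busybox
	out/minikube-linux-amd64 -p test-preload-401855 image list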
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-401855 -n test-preload-401855
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-401855 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-401855 logs -n 25: (1.108758491s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-456313 ssh -n multinode-456313-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ ssh     │ multinode-456313 ssh -n multinode-456313 sudo cat /home/docker/cp-test_multinode-456313-m03_multinode-456313.txt                                          │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ cp      │ multinode-456313 cp multinode-456313-m03:/home/docker/cp-test.txt multinode-456313-m02:/home/docker/cp-test_multinode-456313-m03_multinode-456313-m02.txt │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ ssh     │ multinode-456313 ssh -n multinode-456313-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ ssh     │ multinode-456313 ssh -n multinode-456313-m02 sudo cat /home/docker/cp-test_multinode-456313-m03_multinode-456313-m02.txt                                  │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ node    │ multinode-456313 node stop m03                                                                                                                            │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ node    │ multinode-456313 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:49 UTC │
	│ node    │ list -p multinode-456313                                                                                                                                  │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │                     │
	│ stop    │ -p multinode-456313                                                                                                                                       │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:49 UTC │ 01 Nov 25 10:52 UTC │
	│ start   │ -p multinode-456313 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:52 UTC │ 01 Nov 25 10:55 UTC │
	│ node    │ list -p multinode-456313                                                                                                                                  │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:55 UTC │                     │
	│ node    │ multinode-456313 node delete m03                                                                                                                          │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:55 UTC │ 01 Nov 25 10:55 UTC │
	│ stop    │ multinode-456313 stop                                                                                                                                     │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:55 UTC │ 01 Nov 25 10:57 UTC │
	│ start   │ -p multinode-456313 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:57 UTC │ 01 Nov 25 10:59 UTC │
	│ node    │ list -p multinode-456313                                                                                                                                  │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 10:59 UTC │                     │
	│ start   │ -p multinode-456313-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-456313-m02 │ jenkins │ v1.37.0 │ 01 Nov 25 10:59 UTC │                     │
	│ start   │ -p multinode-456313-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-456313-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 10:59 UTC │ 01 Nov 25 11:00 UTC │
	│ node    │ add -p multinode-456313                                                                                                                                   │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 11:00 UTC │                     │
	│ delete  │ -p multinode-456313-m03                                                                                                                                   │ multinode-456313-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 11:00 UTC │ 01 Nov 25 11:00 UTC │
	│ delete  │ -p multinode-456313                                                                                                                                       │ multinode-456313     │ jenkins │ v1.37.0 │ 01 Nov 25 11:00 UTC │ 01 Nov 25 11:00 UTC │
	│ start   │ -p test-preload-401855 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-401855  │ jenkins │ v1.37.0 │ 01 Nov 25 11:00 UTC │ 01 Nov 25 11:01 UTC │
	│ image   │ test-preload-401855 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-401855  │ jenkins │ v1.37.0 │ 01 Nov 25 11:01 UTC │ 01 Nov 25 11:01 UTC │
	│ stop    │ -p test-preload-401855                                                                                                                                    │ test-preload-401855  │ jenkins │ v1.37.0 │ 01 Nov 25 11:01 UTC │ 01 Nov 25 11:02 UTC │
	│ start   │ -p test-preload-401855 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-401855  │ jenkins │ v1.37.0 │ 01 Nov 25 11:02 UTC │ 01 Nov 25 11:02 UTC │
	│ image   │ test-preload-401855 image list                                                                                                                            │ test-preload-401855  │ jenkins │ v1.37.0 │ 01 Nov 25 11:02 UTC │ 01 Nov 25 11:02 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:02:04
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:02:04.150873  101116 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:02:04.151101  101116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:02:04.151120  101116 out.go:374] Setting ErrFile to fd 2...
	I1101 11:02:04.151123  101116 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:02:04.151335  101116 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 11:02:04.151746  101116 out.go:368] Setting JSON to false
	I1101 11:02:04.152586  101116 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":9872,"bootTime":1761985052,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 11:02:04.152689  101116 start.go:143] virtualization: kvm guest
	I1101 11:02:04.154654  101116 out.go:179] * [test-preload-401855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 11:02:04.155914  101116 notify.go:221] Checking for updates...
	I1101 11:02:04.155939  101116 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:02:04.157333  101116 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:02:04.158552  101116 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:02:04.159817  101116 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 11:02:04.160994  101116 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 11:02:04.162035  101116 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:02:04.163489  101116 config.go:182] Loaded profile config "test-preload-401855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 11:02:04.165063  101116 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 11:02:04.166301  101116 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:02:04.200066  101116 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 11:02:04.201596  101116 start.go:309] selected driver: kvm2
	I1101 11:02:04.201615  101116 start.go:930] validating driver "kvm2" against &{Name:test-preload-401855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-401855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:02:04.201738  101116 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:02:04.203006  101116 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:02:04.203049  101116 cni.go:84] Creating CNI manager for ""
	I1101 11:02:04.203101  101116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:02:04.203148  101116 start.go:353] cluster config:
	{Name:test-preload-401855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-401855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:02:04.203235  101116 iso.go:125] acquiring lock: {Name:mk49d9a272bb99d336f82dfc5631a4c8ce9271c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:02:04.204825  101116 out.go:179] * Starting "test-preload-401855" primary control-plane node in "test-preload-401855" cluster
	I1101 11:02:04.205965  101116 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 11:02:04.231005  101116 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 11:02:04.231040  101116 cache.go:59] Caching tarball of preloaded images
	I1101 11:02:04.231201  101116 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 11:02:04.232967  101116 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1101 11:02:04.233995  101116 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 11:02:04.260550  101116 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1101 11:02:04.260604  101116 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 11:02:07.183801  101116 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1101 11:02:07.183954  101116 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/config.json ...
	I1101 11:02:07.184200  101116 start.go:360] acquireMachinesLock for test-preload-401855: {Name:mk53a05d125fe91ead2a39c6bbf2ba926c471e2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 11:02:07.184279  101116 start.go:364] duration metric: took 48.884µs to acquireMachinesLock for "test-preload-401855"
	I1101 11:02:07.184303  101116 start.go:96] Skipping create...Using existing machine configuration
	I1101 11:02:07.184314  101116 fix.go:54] fixHost starting: 
	I1101 11:02:07.185938  101116 fix.go:112] recreateIfNeeded on test-preload-401855: state=Stopped err=<nil>
	W1101 11:02:07.185973  101116 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 11:02:07.187936  101116 out.go:252] * Restarting existing kvm2 VM for "test-preload-401855" ...
	I1101 11:02:07.187970  101116 main.go:143] libmachine: starting domain...
	I1101 11:02:07.187981  101116 main.go:143] libmachine: ensuring networks are active...
	I1101 11:02:07.188671  101116 main.go:143] libmachine: Ensuring network default is active
	I1101 11:02:07.188988  101116 main.go:143] libmachine: Ensuring network mk-test-preload-401855 is active
	I1101 11:02:07.189307  101116 main.go:143] libmachine: getting domain XML...
	I1101 11:02:07.190321  101116 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-401855</name>
	  <uuid>48a98845-257d-4857-859e-dba177598460</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/test-preload-401855/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/test-preload-401855/test-preload-401855.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:8e:17:26'/>
	      <source network='mk-test-preload-401855'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:aa:c7:84'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 11:02:08.441329  101116 main.go:143] libmachine: waiting for domain to start...
	I1101 11:02:08.442605  101116 main.go:143] libmachine: domain is now running
	I1101 11:02:08.442625  101116 main.go:143] libmachine: waiting for IP...
	I1101 11:02:08.443374  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:08.443818  101116 main.go:143] libmachine: domain test-preload-401855 has current primary IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:08.443828  101116 main.go:143] libmachine: found domain IP: 192.168.39.101
	I1101 11:02:08.443833  101116 main.go:143] libmachine: reserving static IP address...
	I1101 11:02:08.444515  101116 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-401855", mac: "52:54:00:8e:17:26", ip: "192.168.39.101"} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:00:35 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:08.444560  101116 main.go:143] libmachine: skip adding static IP to network mk-test-preload-401855 - found existing host DHCP lease matching {name: "test-preload-401855", mac: "52:54:00:8e:17:26", ip: "192.168.39.101"}
	I1101 11:02:08.444576  101116 main.go:143] libmachine: reserved static IP address 192.168.39.101 for domain test-preload-401855
	I1101 11:02:08.444582  101116 main.go:143] libmachine: waiting for SSH...
	I1101 11:02:08.444588  101116 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 11:02:08.446686  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:08.446986  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:00:35 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:08.447006  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:08.447144  101116 main.go:143] libmachine: Using SSH client type: native
	I1101 11:02:08.447408  101116 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1101 11:02:08.447422  101116 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 11:02:11.513722  101116 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.101:22: connect: no route to host
	I1101 11:02:17.592856  101116 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.101:22: connect: no route to host
	I1101 11:02:20.597064  101116 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.101:22: connect: connection refused
	I1101 11:02:23.702432  101116 main.go:143] libmachine: SSH cmd err, output: <nil>: 
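The "Error dialing TCP" lines above are the driver polling port 22 until sshd inside the guest answers ("no route to host" while the interface comes up, then "connection refused" until sshd starts). A stdlib-only sketch of that retry loop; the timings are illustrative, not the driver's actual values:

	package main

	import (
		"fmt"
		"log"
		"net"
		"time"
	)

	// waitForSSH polls host:22 until a TCP connection succeeds or the deadline
	// passes, mirroring the "no route to host" / "connection refused" retries above.
	func waitForSSH(host string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), 3*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("ssh not reachable on %s: %w", host, err)
			}
			log.Printf("Error dialing TCP: %v (retrying)", err)
			time.Sleep(3 * time.Second)
		}
	}

	func main() {
		if err := waitForSSH("192.168.39.101", time.Minute); err != nil {
			log.Fatal(err)
		}
		log.Println("SSH port is open")
	}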
	I1101 11:02:23.706056  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:23.706492  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:23.706525  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:23.706820  101116 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/config.json ...
	I1101 11:02:23.707068  101116 machine.go:94] provisionDockerMachine start ...
	I1101 11:02:23.709173  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:23.709499  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:23.709518  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:23.709663  101116 main.go:143] libmachine: Using SSH client type: native
	I1101 11:02:23.709884  101116 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1101 11:02:23.709895  101116 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:02:23.812915  101116 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 11:02:23.812961  101116 buildroot.go:166] provisioning hostname "test-preload-401855"
	I1101 11:02:23.815668  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:23.816132  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:23.816167  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:23.816365  101116 main.go:143] libmachine: Using SSH client type: native
	I1101 11:02:23.816647  101116 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1101 11:02:23.816665  101116 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-401855 && echo "test-preload-401855" | sudo tee /etc/hostname
	I1101 11:02:23.937032  101116 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-401855
	
	I1101 11:02:23.939936  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:23.940378  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:23.940403  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:23.940596  101116 main.go:143] libmachine: Using SSH client type: native
	I1101 11:02:23.940829  101116 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1101 11:02:23.940851  101116 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-401855' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-401855/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-401855' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:02:24.052751  101116 main.go:143] libmachine: SSH cmd err, output: <nil>: 
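The shell snippet above makes the new hostname resolve locally by rewriting (or appending) the 127.0.1.1 entry in /etc/hosts. A rough Go equivalent of that logic, assuming a plain /etc/hosts layout; the function and path handling here are illustrative, not minikube's provisioner:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// ensureLoopbackHostname mirrors the grep/sed/tee logic above: if no line maps
	// the hostname, rewrite an existing "127.0.1.1 ..." entry or append a new one.
	func ensureLoopbackHostname(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
			return nil // hostname already present
		}
		loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		entry := "127.0.1.1 " + hostname
		if loop.Match(data) {
			data = loop.ReplaceAll(data, []byte(entry))
		} else {
			data = append(data, []byte("\n"+entry+"\n")...)
		}
		return os.WriteFile(path, data, 0o644)
	}

	func main() {
		if err := ensureLoopbackHostname("/etc/hosts", "test-preload-401855"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}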
	I1101 11:02:24.052785  101116 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 11:02:24.052821  101116 buildroot.go:174] setting up certificates
	I1101 11:02:24.052831  101116 provision.go:84] configureAuth start
	I1101 11:02:24.055722  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:24.056082  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:24.056102  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:24.058452  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:24.058792  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:24.058812  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:24.058930  101116 provision.go:143] copyHostCerts
	I1101 11:02:24.058973  101116 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem, removing ...
	I1101 11:02:24.058992  101116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem
	I1101 11:02:24.059055  101116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 11:02:24.059140  101116 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem, removing ...
	I1101 11:02:24.059148  101116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem
	I1101 11:02:24.059179  101116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 11:02:24.059246  101116 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem, removing ...
	I1101 11:02:24.059254  101116 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem
	I1101 11:02:24.059280  101116 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 11:02:24.059381  101116 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.test-preload-401855 san=[127.0.0.1 192.168.39.101 localhost minikube test-preload-401855]
	I1101 11:02:24.452982  101116 provision.go:177] copyRemoteCerts
	I1101 11:02:24.453047  101116 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:02:24.455656  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:24.456107  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:24.456132  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:24.456264  101116 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/test-preload-401855/id_rsa Username:docker}
	I1101 11:02:24.538841  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 11:02:24.570032  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 11:02:24.600769  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:02:24.631598  101116 provision.go:87] duration metric: took 578.75212ms to configureAuth
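configureAuth (provision.go:84–117 above) re-issues a server certificate whose SANs cover the machine IP and names listed in the "generating server cert ... san=[...]" line. A stripped-down sketch of issuing such a cert from an existing CA with crypto/x509; it assumes the CA key is an RSA PKCS#1 PEM, and key size and validity are illustrative:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"errors"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	// signServerCert issues a server certificate for the given SANs, signed by the
	// CA key pair on disk; a sketch of the step above, not the driver's helper.
	func signServerCert(caCertPEM, caKeyPEM []byte, ips []net.IP, dns []string) ([]byte, []byte, error) {
		caBlock, _ := pem.Decode(caCertPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		if caBlock == nil || keyBlock == nil {
			return nil, nil, errors.New("could not decode CA PEM data")
		}
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			return nil, nil, err
		}
		// Assumes an RSA PKCS#1 CA key ("RSA PRIVATE KEY" block).
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			return nil, nil, err
		}
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-401855"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     dns,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
		return certPEM, keyPEM, nil
	}

	func main() {
		caCert, _ := os.ReadFile("ca.pem")
		caKey, _ := os.ReadFile("ca-key.pem")
		cert, key, err := signServerCert(caCert, caKey,
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.101")},
			[]string{"localhost", "minikube", "test-preload-401855"})
		if err != nil {
			log.Fatal(err)
		}
		_ = os.WriteFile("server.pem", cert, 0o644)
		_ = os.WriteFile("server-key.pem", key, 0o600)
	}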
	I1101 11:02:24.631626  101116 buildroot.go:189] setting minikube options for container-runtime
	I1101 11:02:24.631817  101116 config.go:182] Loaded profile config "test-preload-401855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 11:02:24.635012  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:24.635475  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:24.635504  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:24.635674  101116 main.go:143] libmachine: Using SSH client type: native
	I1101 11:02:24.635933  101116 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1101 11:02:24.635956  101116 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:02:24.886644  101116 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:02:24.886677  101116 machine.go:97] duration metric: took 1.179591005s to provisionDockerMachine
	I1101 11:02:24.886690  101116 start.go:293] postStartSetup for "test-preload-401855" (driver="kvm2")
	I1101 11:02:24.886701  101116 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:02:24.886761  101116 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:02:24.889630  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:24.890008  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:24.890038  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:24.890207  101116 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/test-preload-401855/id_rsa Username:docker}
	I1101 11:02:24.973772  101116 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:02:24.979096  101116 info.go:137] Remote host: Buildroot 2025.02
	I1101 11:02:24.979125  101116 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 11:02:24.979200  101116 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 11:02:24.979290  101116 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem -> 739982.pem in /etc/ssl/certs
	I1101 11:02:24.979386  101116 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:02:24.991914  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:02:25.027186  101116 start.go:296] duration metric: took 140.477139ms for postStartSetup
	I1101 11:02:25.027241  101116 fix.go:56] duration metric: took 17.842925948s for fixHost
	I1101 11:02:25.030439  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:25.030929  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:25.030973  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:25.031183  101116 main.go:143] libmachine: Using SSH client type: native
	I1101 11:02:25.031475  101116 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.101 22 <nil> <nil>}
	I1101 11:02:25.031492  101116 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 11:02:25.144239  101116 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761994945.108510416
	
	I1101 11:02:25.144268  101116 fix.go:216] guest clock: 1761994945.108510416
	I1101 11:02:25.144278  101116 fix.go:229] Guest: 2025-11-01 11:02:25.108510416 +0000 UTC Remote: 2025-11-01 11:02:25.027247141 +0000 UTC m=+20.925109864 (delta=81.263275ms)
	I1101 11:02:25.144298  101116 fix.go:200] guest clock delta is within tolerance: 81.263275ms
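fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs when the delta exceeds a tolerance. A small sketch of that comparison (the tolerance value is illustrative; the input below reproduces the 81.263275ms delta logged above):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output (e.g. "1761994945.108510416")
	// and returns how far the guest clock is from hostNow.
	func clockDelta(guestOutput string, hostNow time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOutput), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Right-pad/truncate the fractional part to nanoseconds.
			frac := (parts[1] + "000000000")[:9]
			nsec, err = strconv.ParseInt(frac, 10, 64)
			if err != nil {
				return 0, err
			}
		}
		guest := time.Unix(sec, nsec)
		return guest.Sub(hostNow), nil
	}

	func main() {
		delta, err := clockDelta("1761994945.108510416", time.Unix(1761994945, 27247141))
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // illustrative; the driver uses its own threshold
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
		}
	}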
	I1101 11:02:25.144304  101116 start.go:83] releasing machines lock for "test-preload-401855", held for 17.960011337s
	I1101 11:02:25.147336  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:25.147775  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:25.147802  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:25.148376  101116 ssh_runner.go:195] Run: cat /version.json
	I1101 11:02:25.148474  101116 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:02:25.151668  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:25.151729  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:25.152110  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:25.152133  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:25.152158  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:25.152183  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:25.152284  101116 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/test-preload-401855/id_rsa Username:docker}
	I1101 11:02:25.152449  101116 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/test-preload-401855/id_rsa Username:docker}
	I1101 11:02:25.228768  101116 ssh_runner.go:195] Run: systemctl --version
	I1101 11:02:25.254488  101116 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:02:25.400913  101116 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:02:25.409006  101116 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:02:25.409100  101116 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:02:25.430976  101116 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 11:02:25.431010  101116 start.go:496] detecting cgroup driver to use...
	I1101 11:02:25.431081  101116 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:02:25.450109  101116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:02:25.467518  101116 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:02:25.467594  101116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:02:25.485349  101116 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:02:25.502168  101116 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:02:25.649781  101116 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:02:25.883836  101116 docker.go:234] disabling docker service ...
	I1101 11:02:25.883939  101116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:02:25.900743  101116 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:02:25.916289  101116 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:02:26.075807  101116 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:02:26.222827  101116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:02:26.239919  101116 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:02:26.263801  101116 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1101 11:02:26.263880  101116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:02:26.276633  101116 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:02:26.276757  101116 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:02:26.289883  101116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:02:26.302983  101116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:02:26.316172  101116 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:02:26.330716  101116 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:02:26.344254  101116 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:02:26.366468  101116 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:02:26.380129  101116 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:02:26.391461  101116 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 11:02:26.391522  101116 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 11:02:26.413479  101116 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:02:26.426274  101116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:02:26.568137  101116 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:02:26.684543  101116 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:02:26.684632  101116 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:02:26.690443  101116 start.go:564] Will wait 60s for crictl version
	I1101 11:02:26.690507  101116 ssh_runner.go:195] Run: which crictl
	I1101 11:02:26.694749  101116 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 11:02:26.741286  101116 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 11:02:26.741371  101116 ssh_runner.go:195] Run: crio --version
	I1101 11:02:26.771216  101116 ssh_runner.go:195] Run: crio --version
	I1101 11:02:26.804098  101116 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1101 11:02:26.807933  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:26.808303  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:26.808322  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:26.808526  101116 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 11:02:26.813393  101116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:02:26.829327  101116 kubeadm.go:884] updating cluster {Name:test-preload-401855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.32.0 ClusterName:test-preload-401855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:02:26.829465  101116 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 11:02:26.829523  101116 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:02:26.870265  101116 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1101 11:02:26.870343  101116 ssh_runner.go:195] Run: which lz4
	I1101 11:02:26.875114  101116 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 11:02:26.880254  101116 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 11:02:26.880291  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1101 11:02:28.479245  101116 crio.go:462] duration metric: took 1.604163558s to copy over tarball
	I1101 11:02:28.479336  101116 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 11:02:30.244772  101116 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.765400818s)
	I1101 11:02:30.244802  101116 crio.go:469] duration metric: took 1.765521804s to extract the tarball
	I1101 11:02:30.244810  101116 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 11:02:30.285973  101116 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:02:30.330080  101116 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:02:30.330104  101116 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:02:30.330112  101116 kubeadm.go:935] updating node { 192.168.39.101 8443 v1.32.0 crio true true} ...
	I1101 11:02:30.330205  101116 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-401855 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.101
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-401855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:02:30.330271  101116 ssh_runner.go:195] Run: crio config
	I1101 11:02:30.377986  101116 cni.go:84] Creating CNI manager for ""
	I1101 11:02:30.378013  101116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:02:30.378034  101116 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:02:30.378084  101116 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.101 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-401855 NodeName:test-preload-401855 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.101"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.101 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:02:30.378230  101116 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.101
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-401855"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.101"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.101"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
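The kubeadm/kubelet/kube-proxy YAML above is rendered from the options struct at kubeadm.go:190 and copied to /var/tmp/minikube/kubeadm.yaml.new. A toy text/template sketch of rendering just the KubeletConfiguration stanza from such options; the template and field names are illustrative, not minikube's actual template:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// kubeletOpts holds the handful of fields the KubeletConfiguration stanza
	// above actually varies on; field names here are illustrative.
	type kubeletOpts struct {
		CgroupDriver  string
		RuntimeSocket string
		ClusterDomain string
		StaticPodPath string
	}

	const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: {{.CgroupDriver}}
	containerRuntimeEndpoint: {{.RuntimeSocket}}
	clusterDomain: "{{.ClusterDomain}}"
	failSwapOn: false
	staticPodPath: {{.StaticPodPath}}
	`

	func main() {
		opts := kubeletOpts{
			CgroupDriver:  "cgroupfs",
			RuntimeSocket: "unix:///var/run/crio/crio.sock",
			ClusterDomain: "cluster.local",
			StaticPodPath: "/etc/kubernetes/manifests",
		}
		t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
		if err := t.Execute(os.Stdout, opts); err != nil {
			log.Fatal(err)
		}
	}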
	
	I1101 11:02:30.378314  101116 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1101 11:02:30.391117  101116 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:02:30.391192  101116 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:02:30.403513  101116 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1101 11:02:30.424918  101116 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:02:30.445729  101116 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I1101 11:02:30.467486  101116 ssh_runner.go:195] Run: grep 192.168.39.101	control-plane.minikube.internal$ /etc/hosts
	I1101 11:02:30.471793  101116 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.101	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:02:30.487182  101116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:02:30.638607  101116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:02:30.674636  101116 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855 for IP: 192.168.39.101
	I1101 11:02:30.674664  101116 certs.go:195] generating shared ca certs ...
	I1101 11:02:30.674681  101116 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:02:30.674864  101116 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 11:02:30.674929  101116 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 11:02:30.674940  101116 certs.go:257] generating profile certs ...
	I1101 11:02:30.675018  101116 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/client.key
	I1101 11:02:30.675091  101116 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/apiserver.key.624140f9
	I1101 11:02:30.675127  101116 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/proxy-client.key
	I1101 11:02:30.675230  101116 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem (1338 bytes)
	W1101 11:02:30.675258  101116 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998_empty.pem, impossibly tiny 0 bytes
	I1101 11:02:30.675268  101116 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 11:02:30.675288  101116 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:02:30.675310  101116 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:02:30.675330  101116 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 11:02:30.675379  101116 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:02:30.675917  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:02:30.710720  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:02:30.753445  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:02:30.783055  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:02:30.812623  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 11:02:30.842193  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:02:30.872061  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:02:30.901742  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 11:02:30.931138  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:02:30.960443  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem --> /usr/share/ca-certificates/73998.pem (1338 bytes)
	I1101 11:02:30.990627  101116 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /usr/share/ca-certificates/739982.pem (1708 bytes)
	I1101 11:02:31.021058  101116 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:02:31.043083  101116 ssh_runner.go:195] Run: openssl version
	I1101 11:02:31.050007  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73998.pem && ln -fs /usr/share/ca-certificates/73998.pem /etc/ssl/certs/73998.pem"
	I1101 11:02:31.063807  101116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73998.pem
	I1101 11:02:31.069312  101116 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:03 /usr/share/ca-certificates/73998.pem
	I1101 11:02:31.069374  101116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73998.pem
	I1101 11:02:31.077127  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73998.pem /etc/ssl/certs/51391683.0"
	I1101 11:02:31.090928  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739982.pem && ln -fs /usr/share/ca-certificates/739982.pem /etc/ssl/certs/739982.pem"
	I1101 11:02:31.104273  101116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739982.pem
	I1101 11:02:31.109616  101116 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:03 /usr/share/ca-certificates/739982.pem
	I1101 11:02:31.109669  101116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739982.pem
	I1101 11:02:31.116983  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/739982.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:02:31.130005  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:02:31.143235  101116 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:02:31.148832  101116 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:02:31.148885  101116 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:02:31.156312  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:02:31.170232  101116 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:02:31.175977  101116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:02:31.183924  101116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:02:31.191603  101116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:02:31.199156  101116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:02:31.206642  101116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:02:31.214237  101116 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
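The six `openssl x509 -checkend 86400` calls above verify that each control-plane certificate remains valid for at least another day. The same check in Go, stdlib only, looks roughly like this:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, errors.New("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				fmt.Fprintf(os.Stderr, "%s: %v\n", p, err)
				continue
			}
			fmt.Printf("%s expires within 24h: %v\n", p, soon)
		}
	}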
	I1101 11:02:31.221902  101116 kubeadm.go:401] StartCluster: {Name:test-preload-401855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-401855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:02:31.221977  101116 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:02:31.222023  101116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:02:31.260765  101116 cri.go:89] found id: ""
	I1101 11:02:31.260867  101116 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:02:31.273609  101116 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:02:31.273631  101116 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:02:31.273685  101116 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:02:31.285409  101116 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:02:31.285808  101116 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-401855" does not appear in /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:02:31.285948  101116 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-70113/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-401855" cluster setting kubeconfig missing "test-preload-401855" context setting]
	I1101 11:02:31.286215  101116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:02:31.286728  101116 kapi.go:59] client config for test-preload-401855: &rest.Config{Host:"https://192.168.39.101:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/client.key", CAFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:02:31.287187  101116 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 11:02:31.287207  101116 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 11:02:31.287215  101116 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 11:02:31.287220  101116 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 11:02:31.287225  101116 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 11:02:31.287556  101116 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:02:31.298466  101116 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.101
	I1101 11:02:31.298491  101116 kubeadm.go:1161] stopping kube-system containers ...
	I1101 11:02:31.298503  101116 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 11:02:31.298575  101116 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:02:31.338835  101116 cri.go:89] found id: ""
	I1101 11:02:31.338924  101116 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 11:02:31.363249  101116 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:02:31.376348  101116 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:02:31.376370  101116 kubeadm.go:158] found existing configuration files:
	
	I1101 11:02:31.376432  101116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:02:31.388387  101116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:02:31.388456  101116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:02:31.401163  101116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:02:31.412910  101116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:02:31.412973  101116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:02:31.426280  101116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:02:31.438132  101116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:02:31.438200  101116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:02:31.450379  101116 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:02:31.461349  101116 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:02:31.461408  101116 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:02:31.473803  101116 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:02:31.485873  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:02:31.543381  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:02:32.217748  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:02:32.498915  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:02:32.583893  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
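Rather than a full kubeadm init, the restart path above re-runs only the individual phases it needs, in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the same pre-rendered /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence, assuming kubeadm is on the node's PATH and run via a local shell rather than ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // runInitPhases re-executes the kubeadm init phases shown in the log above,
    // stopping at the first failure.
    func runInitPhases(kubeadmPath, configPath string) error {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, strings.Fields(phase)...)
            args = append(args, "--config", configPath)
            cmd := exec.Command(kubeadmPath, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                return fmt.Errorf("kubeadm init phase %q: %w", phase, err)
            }
        }
        return nil
    }

    func main() {
        if err := runInitPhases("kubeadm", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }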
	I1101 11:02:32.650213  101116 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:02:32.650288  101116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:02:33.151364  101116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:02:33.650561  101116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:02:34.151251  101116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:02:34.651175  101116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:02:35.150601  101116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:02:35.181598  101116 api_server.go:72] duration metric: took 2.531398102s to wait for apiserver process to appear ...
	I1101 11:02:35.181632  101116 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:02:35.181657  101116 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1101 11:02:35.182203  101116 api_server.go:269] stopped: https://192.168.39.101:8443/healthz: Get "https://192.168.39.101:8443/healthz": dial tcp 192.168.39.101:8443: connect: connection refused
	I1101 11:02:35.682427  101116 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1101 11:02:38.080175  101116 api_server.go:279] https://192.168.39.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:02:38.080206  101116 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:02:38.080224  101116 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1101 11:02:38.100367  101116 api_server.go:279] https://192.168.39.101:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:02:38.100402  101116 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:02:38.182796  101116 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1101 11:02:38.194333  101116 api_server.go:279] https://192.168.39.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:02:38.194364  101116 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:02:38.681807  101116 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1101 11:02:38.686454  101116 api_server.go:279] https://192.168.39.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:02:38.686486  101116 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:02:39.181980  101116 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1101 11:02:39.187614  101116 api_server.go:279] https://192.168.39.101:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:02:39.187649  101116 api_server.go:103] status: https://192.168.39.101:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:02:39.682485  101116 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1101 11:02:39.687089  101116 api_server.go:279] https://192.168.39.101:8443/healthz returned 200:
	ok
	I1101 11:02:39.695989  101116 api_server.go:141] control plane version: v1.32.0
	I1101 11:02:39.696020  101116 api_server.go:131] duration metric: took 4.514380954s to wait for apiserver health ...
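The healthz probe above is an unauthenticated GET against the apiserver, so the early 403 (anonymous user) and 500 (post-start hooks still failing) responses are handled like a refused connection: not ready yet, retry. A minimal poller with the same shape, a sketch only, assuming the self-signed serving certificate is skipped rather than verified:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes; connection errors and non-200 codes both mean "retry".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The probe is anonymous, so the serving cert is not verified here (sketch only).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.101:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }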
	I1101 11:02:39.696030  101116 cni.go:84] Creating CNI manager for ""
	I1101 11:02:39.696037  101116 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:02:39.698238  101116 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:02:39.699602  101116 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:02:39.725924  101116 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 11:02:39.750502  101116 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:02:39.758493  101116 system_pods.go:59] 7 kube-system pods found
	I1101 11:02:39.758543  101116 system_pods.go:61] "coredns-668d6bf9bc-jxpvh" [abfa6cb4-f481-49c3-8d4f-c2965695baa6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:02:39.758554  101116 system_pods.go:61] "etcd-test-preload-401855" [de5e05cf-ed67-4456-a89d-e64a6bbecf4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:02:39.758567  101116 system_pods.go:61] "kube-apiserver-test-preload-401855" [4e008ca3-acf9-4b5d-a3fd-c91110f18e69] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:02:39.758577  101116 system_pods.go:61] "kube-controller-manager-test-preload-401855" [d9f857fd-9cae-4e6c-ab11-6fccc40e0932] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:02:39.758586  101116 system_pods.go:61] "kube-proxy-b7qr6" [137a4f2f-9daa-4265-96db-4938d2459f31] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 11:02:39.758596  101116 system_pods.go:61] "kube-scheduler-test-preload-401855" [10a82e5f-c8dc-4f3e-885e-ee3848cd035a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:02:39.758610  101116 system_pods.go:61] "storage-provisioner" [77d8393c-13a2-4420-a857-b69650273b40] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:02:39.758623  101116 system_pods.go:74] duration metric: took 8.093568ms to wait for pod list to return data ...
	I1101 11:02:39.758632  101116 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:02:39.769089  101116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:02:39.769127  101116 node_conditions.go:123] node cpu capacity is 2
	I1101 11:02:39.769140  101116 node_conditions.go:105] duration metric: took 10.503065ms to run NodePressure ...
	I1101 11:02:39.769189  101116 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:02:40.091255  101116 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 11:02:40.097589  101116 kubeadm.go:744] kubelet initialised
	I1101 11:02:40.097607  101116 kubeadm.go:745] duration metric: took 6.327993ms waiting for restarted kubelet to initialise ...
	I1101 11:02:40.097624  101116 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:02:40.116385  101116 ops.go:34] apiserver oom_adj: -16
	I1101 11:02:40.116409  101116 kubeadm.go:602] duration metric: took 8.842771638s to restartPrimaryControlPlane
	I1101 11:02:40.116418  101116 kubeadm.go:403] duration metric: took 8.894525163s to StartCluster
	I1101 11:02:40.116442  101116 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:02:40.116513  101116 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:02:40.117074  101116 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:02:40.117310  101116 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.101 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:02:40.117377  101116 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:02:40.117489  101116 addons.go:70] Setting storage-provisioner=true in profile "test-preload-401855"
	I1101 11:02:40.117509  101116 addons.go:239] Setting addon storage-provisioner=true in "test-preload-401855"
	W1101 11:02:40.117558  101116 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:02:40.117578  101116 config.go:182] Loaded profile config "test-preload-401855": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 11:02:40.117601  101116 host.go:66] Checking if "test-preload-401855" exists ...
	I1101 11:02:40.117525  101116 addons.go:70] Setting default-storageclass=true in profile "test-preload-401855"
	I1101 11:02:40.117627  101116 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-401855"
	I1101 11:02:40.119205  101116 out.go:179] * Verifying Kubernetes components...
	I1101 11:02:40.120067  101116 kapi.go:59] client config for test-preload-401855: &rest.Config{Host:"https://192.168.39.101:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/client.key", CAFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
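The rest.Config dump above is the client configuration minikube assembles from the profile's client certificate and key plus the cluster CA. A minimal client-go equivalent using the same fields, with illustrative paths rather than the real profile locations:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Certificate-based client config, mirroring the TLSClientConfig fields logged above.
        cfg := &rest.Config{
            Host: "https://192.168.39.101:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/path/to/profiles/test-preload-401855/client.crt", // illustrative paths
                KeyFile:  "/path/to/profiles/test-preload-401855/client.key",
                CAFile:   "/path/to/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("found %d kube-system pods\n", len(pods.Items))
    }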
	I1101 11:02:40.120330  101116 addons.go:239] Setting addon default-storageclass=true in "test-preload-401855"
	W1101 11:02:40.120345  101116 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:02:40.120372  101116 host.go:66] Checking if "test-preload-401855" exists ...
	I1101 11:02:40.121819  101116 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:02:40.121842  101116 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:02:40.123043  101116 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:02:40.123080  101116 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:02:40.124223  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:40.124577  101116 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:02:40.124598  101116 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:02:40.124664  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:40.124697  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:40.124949  101116 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/test-preload-401855/id_rsa Username:docker}
	I1101 11:02:40.127490  101116 main.go:143] libmachine: domain test-preload-401855 has defined MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:40.127924  101116 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8e:17:26", ip: ""} in network mk-test-preload-401855: {Iface:virbr1 ExpiryTime:2025-11-01 12:02:19 +0000 UTC Type:0 Mac:52:54:00:8e:17:26 Iaid: IPaddr:192.168.39.101 Prefix:24 Hostname:test-preload-401855 Clientid:01:52:54:00:8e:17:26}
	I1101 11:02:40.127963  101116 main.go:143] libmachine: domain test-preload-401855 has defined IP address 192.168.39.101 and MAC address 52:54:00:8e:17:26 in network mk-test-preload-401855
	I1101 11:02:40.128132  101116 sshutil.go:53] new ssh client: &{IP:192.168.39.101 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/test-preload-401855/id_rsa Username:docker}
	I1101 11:02:40.382193  101116 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:02:40.406083  101116 node_ready.go:35] waiting up to 6m0s for node "test-preload-401855" to be "Ready" ...
	I1101 11:02:40.409803  101116 node_ready.go:49] node "test-preload-401855" is "Ready"
	I1101 11:02:40.409838  101116 node_ready.go:38] duration metric: took 3.710284ms for node "test-preload-401855" to be "Ready" ...
	I1101 11:02:40.409853  101116 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:02:40.409897  101116 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:02:40.428938  101116 api_server.go:72] duration metric: took 311.597934ms to wait for apiserver process to appear ...
	I1101 11:02:40.428969  101116 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:02:40.428990  101116 api_server.go:253] Checking apiserver healthz at https://192.168.39.101:8443/healthz ...
	I1101 11:02:40.436002  101116 api_server.go:279] https://192.168.39.101:8443/healthz returned 200:
	ok
	I1101 11:02:40.437237  101116 api_server.go:141] control plane version: v1.32.0
	I1101 11:02:40.437268  101116 api_server.go:131] duration metric: took 8.291821ms to wait for apiserver health ...
	I1101 11:02:40.437280  101116 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:02:40.440743  101116 system_pods.go:59] 7 kube-system pods found
	I1101 11:02:40.440783  101116 system_pods.go:61] "coredns-668d6bf9bc-jxpvh" [abfa6cb4-f481-49c3-8d4f-c2965695baa6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:02:40.440808  101116 system_pods.go:61] "etcd-test-preload-401855" [de5e05cf-ed67-4456-a89d-e64a6bbecf4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:02:40.440826  101116 system_pods.go:61] "kube-apiserver-test-preload-401855" [4e008ca3-acf9-4b5d-a3fd-c91110f18e69] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:02:40.440836  101116 system_pods.go:61] "kube-controller-manager-test-preload-401855" [d9f857fd-9cae-4e6c-ab11-6fccc40e0932] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:02:40.440844  101116 system_pods.go:61] "kube-proxy-b7qr6" [137a4f2f-9daa-4265-96db-4938d2459f31] Running
	I1101 11:02:40.440854  101116 system_pods.go:61] "kube-scheduler-test-preload-401855" [10a82e5f-c8dc-4f3e-885e-ee3848cd035a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:02:40.440861  101116 system_pods.go:61] "storage-provisioner" [77d8393c-13a2-4420-a857-b69650273b40] Running
	I1101 11:02:40.440870  101116 system_pods.go:74] duration metric: took 3.583155ms to wait for pod list to return data ...
	I1101 11:02:40.440885  101116 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:02:40.443520  101116 default_sa.go:45] found service account: "default"
	I1101 11:02:40.443548  101116 default_sa.go:55] duration metric: took 2.654806ms for default service account to be created ...
	I1101 11:02:40.443557  101116 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:02:40.445993  101116 system_pods.go:86] 7 kube-system pods found
	I1101 11:02:40.446018  101116 system_pods.go:89] "coredns-668d6bf9bc-jxpvh" [abfa6cb4-f481-49c3-8d4f-c2965695baa6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:02:40.446025  101116 system_pods.go:89] "etcd-test-preload-401855" [de5e05cf-ed67-4456-a89d-e64a6bbecf4a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:02:40.446033  101116 system_pods.go:89] "kube-apiserver-test-preload-401855" [4e008ca3-acf9-4b5d-a3fd-c91110f18e69] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:02:40.446051  101116 system_pods.go:89] "kube-controller-manager-test-preload-401855" [d9f857fd-9cae-4e6c-ab11-6fccc40e0932] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:02:40.446062  101116 system_pods.go:89] "kube-proxy-b7qr6" [137a4f2f-9daa-4265-96db-4938d2459f31] Running
	I1101 11:02:40.446070  101116 system_pods.go:89] "kube-scheduler-test-preload-401855" [10a82e5f-c8dc-4f3e-885e-ee3848cd035a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:02:40.446075  101116 system_pods.go:89] "storage-provisioner" [77d8393c-13a2-4420-a857-b69650273b40] Running
	I1101 11:02:40.446084  101116 system_pods.go:126] duration metric: took 2.519083ms to wait for k8s-apps to be running ...
	I1101 11:02:40.446097  101116 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:02:40.446139  101116 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:02:40.463609  101116 system_svc.go:56] duration metric: took 17.500427ms WaitForService to wait for kubelet
	I1101 11:02:40.463647  101116 kubeadm.go:587] duration metric: took 346.308599ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:02:40.463695  101116 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:02:40.466687  101116 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:02:40.466714  101116 node_conditions.go:123] node cpu capacity is 2
	I1101 11:02:40.466727  101116 node_conditions.go:105] duration metric: took 3.02463ms to run NodePressure ...
	I1101 11:02:40.466741  101116 start.go:242] waiting for startup goroutines ...
	I1101 11:02:40.608737  101116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:02:40.609060  101116 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:02:41.371014  101116 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 11:02:41.372451  101116 addons.go:515] duration metric: took 1.255074607s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 11:02:41.372488  101116 start.go:247] waiting for cluster config update ...
	I1101 11:02:41.372499  101116 start.go:256] writing updated cluster config ...
	I1101 11:02:41.372794  101116 ssh_runner.go:195] Run: rm -f paused
	I1101 11:02:41.381177  101116 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:02:41.381627  101116 kapi.go:59] client config for test-preload-401855: &rest.Config{Host:"https://192.168.39.101:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/test-preload-401855/client.key", CAFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:02:41.390178  101116 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-jxpvh" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 11:02:43.396133  101116 pod_ready.go:104] pod "coredns-668d6bf9bc-jxpvh" is not "Ready", error: <nil>
	W1101 11:02:45.396730  101116 pod_ready.go:104] pod "coredns-668d6bf9bc-jxpvh" is not "Ready", error: <nil>
	W1101 11:02:47.397001  101116 pod_ready.go:104] pod "coredns-668d6bf9bc-jxpvh" is not "Ready", error: <nil>
	I1101 11:02:47.896420  101116 pod_ready.go:94] pod "coredns-668d6bf9bc-jxpvh" is "Ready"
	I1101 11:02:47.896450  101116 pod_ready.go:86] duration metric: took 6.506246602s for pod "coredns-668d6bf9bc-jxpvh" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:02:47.899317  101116 pod_ready.go:83] waiting for pod "etcd-test-preload-401855" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:02:47.904443  101116 pod_ready.go:94] pod "etcd-test-preload-401855" is "Ready"
	I1101 11:02:47.904468  101116 pod_ready.go:86] duration metric: took 5.127066ms for pod "etcd-test-preload-401855" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:02:47.907286  101116 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-401855" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 11:02:49.913617  101116 pod_ready.go:104] pod "kube-apiserver-test-preload-401855" is not "Ready", error: <nil>
	W1101 11:02:51.913941  101116 pod_ready.go:104] pod "kube-apiserver-test-preload-401855" is not "Ready", error: <nil>
	I1101 11:02:53.414153  101116 pod_ready.go:94] pod "kube-apiserver-test-preload-401855" is "Ready"
	I1101 11:02:53.414194  101116 pod_ready.go:86] duration metric: took 5.50688736s for pod "kube-apiserver-test-preload-401855" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:02:53.417188  101116 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-401855" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:02:53.422397  101116 pod_ready.go:94] pod "kube-controller-manager-test-preload-401855" is "Ready"
	I1101 11:02:53.422429  101116 pod_ready.go:86] duration metric: took 5.208855ms for pod "kube-controller-manager-test-preload-401855" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:02:53.425361  101116 pod_ready.go:83] waiting for pod "kube-proxy-b7qr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:02:53.429950  101116 pod_ready.go:94] pod "kube-proxy-b7qr6" is "Ready"
	I1101 11:02:53.429979  101116 pod_ready.go:86] duration metric: took 4.590021ms for pod "kube-proxy-b7qr6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:02:53.432109  101116 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-401855" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:02:54.494258  101116 pod_ready.go:94] pod "kube-scheduler-test-preload-401855" is "Ready"
	I1101 11:02:54.494289  101116 pod_ready.go:86] duration metric: took 1.06215487s for pod "kube-scheduler-test-preload-401855" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:02:54.494300  101116 pod_ready.go:40] duration metric: took 13.113096534s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
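Each pod_ready wait above polls a single pod until its Ready condition reports True, or the pod disappears. A sketch of that readiness check with client-go's polling helper; the kubeconfig path is illustrative and this is not the wording of minikube's own loop:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return true, nil // pod is gone, which also ends the wait above
                }
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                return isPodReady(pod), nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForPodReady(cs, "kube-system", "coredns-668d6bf9bc-jxpvh", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }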
	I1101 11:02:54.536557  101116 start.go:628] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1101 11:02:54.538280  101116 out.go:203] 
	W1101 11:02:54.539554  101116 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1101 11:02:54.540947  101116 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1101 11:02:54.542323  101116 out.go:179] * Done! kubectl is now configured to use "test-preload-401855" cluster and "default" namespace by default
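The closing warning compares the local kubectl minor version (1.34) with the cluster's (1.32); the message appears because the skew of 2 exceeds the one-minor-version window kubectl supports. The arithmetic, as a sketch with the version strings hard-coded for illustration:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorOf extracts the minor version from a string like "1.34.1" or "v1.32.0".
    func minorOf(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        minor, _ := strconv.Atoi(parts[1])
        return minor
    }

    func main() {
        kubectl, cluster := "1.34.1", "1.32.0"
        skew := minorOf(kubectl) - minorOf(cluster)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew) // 2, matching the log line above
        if skew > 1 {
            fmt.Printf("! kubectl %s may have incompatibilities with Kubernetes %s\n", kubectl, cluster)
        }
    }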
	
	
	==> CRI-O <==
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.351383285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761994975351361796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d196833a-cb78-47b0-b2fd-21f99150ab6d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.352265905Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69320ce3-e95d-4e78-968b-d45f0c301690 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.352417190Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69320ce3-e95d-4e78-968b-d45f0c301690 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.352766260Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68a15cac32f014c61236cc03ba3bd8bce3d486cc521c24ab7935ce9f307fa02,PodSandboxId:bdca8d59f90a3ba5c1ddc159f99f0c0d64183aff51c26805caea8ce7ca90fba1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761994962708784938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jxpvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abfa6cb4-f481-49c3-8d4f-c2965695baa6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60671645e8488dd237a816da3982aea6b247c0ebd1ff70ba59e0dccf7ee63376,PodSandboxId:aa341423b90b2a23d27dd95f0788b48811ce885beaf71b3bdc0fa4fcdf8b585d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761994959080742296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7qr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 137a4f2f-9daa-4265-96db-4938d2459f31,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f4d9f41bd08c5c5216bbd55f1845efb8aba6eb93ef89a829e601a0d873eb9,PodSandboxId:0726d3b2812e0d98f6b32ce23d8c15c98f17c169a816792c9ffb9d8bac20351f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761994959084941333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
d8393c-13a2-4420-a857-b69650273b40,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b390f539de367609e233760ef14beaa06ce5277a074f24797fbab831a8a7ff,PodSandboxId:2efacf1a9cfccd53559a7dca11bd1bcb8a0a65cb89060502556820a0b8b7f756,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761994954803822576,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45499bdc629aceb25e9118fe098d7dd0,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d20a9c4c60b8661704f741dcc0dee91c4af429c6e40095098bf71f2e8286066,PodSandboxId:a3ea2c78010f3f8403ff12a6e6e029a54b4131a5f92b071eee8371a41240e2df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761994954819784797,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183b16442918e89957fe19fcec8b8477,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c69eb3a26b057c9da7de11ac847c82b6b222c72778d966a0f94f3cab6ba59fc,PodSandboxId:23667b7c46e39f5ab79077e6c5c7066e1d28c602213e376151828829dfe1e1c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761994954768464002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d20183d575ef28969a0921cd6e17254,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b0761b552b578e351d1a26afa96282aea6c023accf74aaebc49e96485ed5b2e,PodSandboxId:ae399989326d6156bfc10f83066908a9aab9b1058c3324e1a8536ca50d99147f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761994954769456527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8074f744c0e707f39c799e369b1c60c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69320ce3-e95d-4e78-968b-d45f0c301690 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.394853778Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9bde2032-90a1-411b-9f3e-6d41d6143511 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.394930995Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9bde2032-90a1-411b-9f3e-6d41d6143511 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.396113070Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca6b3d06-d0ba-4f91-8f55-5d6dc2bf8124 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.396572717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761994975396550621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca6b3d06-d0ba-4f91-8f55-5d6dc2bf8124 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.397184141Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9c9efa81-bea4-4aa6-9084-80d824cccbbd name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.397238163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9c9efa81-bea4-4aa6-9084-80d824cccbbd name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.397396485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68a15cac32f014c61236cc03ba3bd8bce3d486cc521c24ab7935ce9f307fa02,PodSandboxId:bdca8d59f90a3ba5c1ddc159f99f0c0d64183aff51c26805caea8ce7ca90fba1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761994962708784938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jxpvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abfa6cb4-f481-49c3-8d4f-c2965695baa6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60671645e8488dd237a816da3982aea6b247c0ebd1ff70ba59e0dccf7ee63376,PodSandboxId:aa341423b90b2a23d27dd95f0788b48811ce885beaf71b3bdc0fa4fcdf8b585d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761994959080742296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7qr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 137a4f2f-9daa-4265-96db-4938d2459f31,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f4d9f41bd08c5c5216bbd55f1845efb8aba6eb93ef89a829e601a0d873eb9,PodSandboxId:0726d3b2812e0d98f6b32ce23d8c15c98f17c169a816792c9ffb9d8bac20351f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761994959084941333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
d8393c-13a2-4420-a857-b69650273b40,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b390f539de367609e233760ef14beaa06ce5277a074f24797fbab831a8a7ff,PodSandboxId:2efacf1a9cfccd53559a7dca11bd1bcb8a0a65cb89060502556820a0b8b7f756,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761994954803822576,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45499bdc629aceb25e9118fe098d7dd0,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d20a9c4c60b8661704f741dcc0dee91c4af429c6e40095098bf71f2e8286066,PodSandboxId:a3ea2c78010f3f8403ff12a6e6e029a54b4131a5f92b071eee8371a41240e2df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761994954819784797,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183b16442918e89957fe19fcec8b8477,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c69eb3a26b057c9da7de11ac847c82b6b222c72778d966a0f94f3cab6ba59fc,PodSandboxId:23667b7c46e39f5ab79077e6c5c7066e1d28c602213e376151828829dfe1e1c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761994954768464002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d20183d575ef28969a0921cd6e17254,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b0761b552b578e351d1a26afa96282aea6c023accf74aaebc49e96485ed5b2e,PodSandboxId:ae399989326d6156bfc10f83066908a9aab9b1058c3324e1a8536ca50d99147f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761994954769456527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8074f744c0e707f39c799e369b1c60c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9c9efa81-bea4-4aa6-9084-80d824cccbbd name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.439668003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a6a5467-5dde-43f5-9913-bddd9f048c71 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.439875000Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a6a5467-5dde-43f5-9913-bddd9f048c71 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.441397554Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a89353df-e11e-4ffc-920d-bedfdbccd512 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.442314792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761994975442292203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a89353df-e11e-4ffc-920d-bedfdbccd512 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.443050078Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2996071-830d-4306-8457-696f1e1f632e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.443249486Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2996071-830d-4306-8457-696f1e1f632e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.443403558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68a15cac32f014c61236cc03ba3bd8bce3d486cc521c24ab7935ce9f307fa02,PodSandboxId:bdca8d59f90a3ba5c1ddc159f99f0c0d64183aff51c26805caea8ce7ca90fba1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761994962708784938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jxpvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abfa6cb4-f481-49c3-8d4f-c2965695baa6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60671645e8488dd237a816da3982aea6b247c0ebd1ff70ba59e0dccf7ee63376,PodSandboxId:aa341423b90b2a23d27dd95f0788b48811ce885beaf71b3bdc0fa4fcdf8b585d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761994959080742296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7qr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 137a4f2f-9daa-4265-96db-4938d2459f31,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f4d9f41bd08c5c5216bbd55f1845efb8aba6eb93ef89a829e601a0d873eb9,PodSandboxId:0726d3b2812e0d98f6b32ce23d8c15c98f17c169a816792c9ffb9d8bac20351f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761994959084941333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
d8393c-13a2-4420-a857-b69650273b40,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b390f539de367609e233760ef14beaa06ce5277a074f24797fbab831a8a7ff,PodSandboxId:2efacf1a9cfccd53559a7dca11bd1bcb8a0a65cb89060502556820a0b8b7f756,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761994954803822576,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45499bdc629aceb25e9118fe098d7dd0,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d20a9c4c60b8661704f741dcc0dee91c4af429c6e40095098bf71f2e8286066,PodSandboxId:a3ea2c78010f3f8403ff12a6e6e029a54b4131a5f92b071eee8371a41240e2df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761994954819784797,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183b16442918e89957fe19fcec8b8477,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c69eb3a26b057c9da7de11ac847c82b6b222c72778d966a0f94f3cab6ba59fc,PodSandboxId:23667b7c46e39f5ab79077e6c5c7066e1d28c602213e376151828829dfe1e1c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761994954768464002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d20183d575ef28969a0921cd6e17254,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b0761b552b578e351d1a26afa96282aea6c023accf74aaebc49e96485ed5b2e,PodSandboxId:ae399989326d6156bfc10f83066908a9aab9b1058c3324e1a8536ca50d99147f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761994954769456527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8074f744c0e707f39c799e369b1c60c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2996071-830d-4306-8457-696f1e1f632e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.478068345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=feacabd6-84cd-4b6c-9a3c-74ecc9472b7c name=/runtime.v1.RuntimeService/Version
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.478185288Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=feacabd6-84cd-4b6c-9a3c-74ecc9472b7c name=/runtime.v1.RuntimeService/Version
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.481704716Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10b69542-3398-4b5e-9854-93407ce4ba40 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.482142422Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761994975482115644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10b69542-3398-4b5e-9854-93407ce4ba40 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.483528429Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cdacf2c6-b961-4fa3-ae91-2eff3b91ebf4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.483674988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cdacf2c6-b961-4fa3-ae91-2eff3b91ebf4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:02:55 test-preload-401855 crio[844]: time="2025-11-01 11:02:55.483891745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e68a15cac32f014c61236cc03ba3bd8bce3d486cc521c24ab7935ce9f307fa02,PodSandboxId:bdca8d59f90a3ba5c1ddc159f99f0c0d64183aff51c26805caea8ce7ca90fba1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761994962708784938,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-jxpvh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: abfa6cb4-f481-49c3-8d4f-c2965695baa6,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60671645e8488dd237a816da3982aea6b247c0ebd1ff70ba59e0dccf7ee63376,PodSandboxId:aa341423b90b2a23d27dd95f0788b48811ce885beaf71b3bdc0fa4fcdf8b585d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761994959080742296,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7qr6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 137a4f2f-9daa-4265-96db-4938d2459f31,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad4f4d9f41bd08c5c5216bbd55f1845efb8aba6eb93ef89a829e601a0d873eb9,PodSandboxId:0726d3b2812e0d98f6b32ce23d8c15c98f17c169a816792c9ffb9d8bac20351f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761994959084941333,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77
d8393c-13a2-4420-a857-b69650273b40,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b390f539de367609e233760ef14beaa06ce5277a074f24797fbab831a8a7ff,PodSandboxId:2efacf1a9cfccd53559a7dca11bd1bcb8a0a65cb89060502556820a0b8b7f756,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761994954803822576,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45499bdc629aceb25e9118fe098d7dd0,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d20a9c4c60b8661704f741dcc0dee91c4af429c6e40095098bf71f2e8286066,PodSandboxId:a3ea2c78010f3f8403ff12a6e6e029a54b4131a5f92b071eee8371a41240e2df,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761994954819784797,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 183b16442918e89957fe19fcec8b8477,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c69eb3a26b057c9da7de11ac847c82b6b222c72778d966a0f94f3cab6ba59fc,PodSandboxId:23667b7c46e39f5ab79077e6c5c7066e1d28c602213e376151828829dfe1e1c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761994954768464002,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d20183d575ef28969a0921cd6e17254,},Annotations:map[string]str
ing{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b0761b552b578e351d1a26afa96282aea6c023accf74aaebc49e96485ed5b2e,PodSandboxId:ae399989326d6156bfc10f83066908a9aab9b1058c3324e1a8536ca50d99147f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761994954769456527,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-401855,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8074f744c0e707f39c799e369b1c60c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cdacf2c6-b961-4fa3-ae91-2eff3b91ebf4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e68a15cac32f0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Running             coredns                   1                   bdca8d59f90a3       coredns-668d6bf9bc-jxpvh
	ad4f4d9f41bd0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   0726d3b2812e0       storage-provisioner
	60671645e8488       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   aa341423b90b2       kube-proxy-b7qr6
	4d20a9c4c60b8       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   a3ea2c78010f3       kube-scheduler-test-preload-401855
	39b390f539de3       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   2efacf1a9cfcc       etcd-test-preload-401855
	2b0761b552b57       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   ae399989326d6       kube-controller-manager-test-preload-401855
	5c69eb3a26b05       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   23667b7c46e39       kube-apiserver-test-preload-401855
	
	
	==> coredns [e68a15cac32f014c61236cc03ba3bd8bce3d486cc521c24ab7935ce9f307fa02] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55936 - 56176 "HINFO IN 1213660645703612307.6101291017023291533. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065168232s
	
	
	==> describe nodes <==
	Name:               test-preload-401855
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-401855
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=test-preload-401855
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_01_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:01:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-401855
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:02:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:02:40 +0000   Sat, 01 Nov 2025 11:01:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:02:40 +0000   Sat, 01 Nov 2025 11:01:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:02:40 +0000   Sat, 01 Nov 2025 11:01:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:02:40 +0000   Sat, 01 Nov 2025 11:02:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.101
	  Hostname:    test-preload-401855
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 48a98845257d4857859edba177598460
	  System UUID:                48a98845-257d-4857-859e-dba177598460
	  Boot ID:                    7ee8854c-43fe-48d8-9357-2a85c0d6c0b0
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-jxpvh                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     99s
	  kube-system                 etcd-test-preload-401855                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         104s
	  kube-system                 kube-apiserver-test-preload-401855             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-test-preload-401855    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-b7qr6                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-scheduler-test-preload-401855             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 97s                  kube-proxy       
	  Normal   Starting                 16s                  kube-proxy       
	  Normal   Starting                 110s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  110s (x8 over 110s)  kubelet          Node test-preload-401855 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    110s (x8 over 110s)  kubelet          Node test-preload-401855 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     110s (x7 over 110s)  kubelet          Node test-preload-401855 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  110s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 104s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    103s                 kubelet          Node test-preload-401855 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     103s                 kubelet          Node test-preload-401855 status is now: NodeHasSufficientPID
	  Normal   NodeReady                103s                 kubelet          Node test-preload-401855 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  103s                 kubelet          Node test-preload-401855 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           100s                 node-controller  Node test-preload-401855 event: Registered Node test-preload-401855 in Controller
	  Normal   Starting                 23s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node test-preload-401855 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node test-preload-401855 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node test-preload-401855 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 17s                  kubelet          Node test-preload-401855 has been rebooted, boot id: 7ee8854c-43fe-48d8-9357-2a85c0d6c0b0
	  Normal   RegisteredNode           14s                  node-controller  Node test-preload-401855 event: Registered Node test-preload-401855 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:02] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001474] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008587] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.922809] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.087957] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.110666] kauditd_printk_skb: 102 callbacks suppressed
	[  +6.510505] kauditd_printk_skb: 177 callbacks suppressed
	[  +4.957181] kauditd_printk_skb: 197 callbacks suppressed
	
	
	==> etcd [39b390f539de367609e233760ef14beaa06ce5277a074f24797fbab831a8a7ff] <==
	{"level":"info","ts":"2025-11-01T11:02:35.292161Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-01T11:02:35.299649Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T11:02:35.299704Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T11:02:35.299714Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T11:02:35.309676Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T11:02:35.309974Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"65e271b8f7cb8d0f","initial-advertise-peer-urls":["https://192.168.39.101:2380"],"listen-peer-urls":["https://192.168.39.101:2380"],"advertise-client-urls":["https://192.168.39.101:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.101:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T11:02:35.310021Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T11:02:35.310134Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.101:2380"}
	{"level":"info","ts":"2025-11-01T11:02:35.310162Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.101:2380"}
	{"level":"info","ts":"2025-11-01T11:02:36.933742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T11:02:36.933799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T11:02:36.933819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f received MsgPreVoteResp from 65e271b8f7cb8d0f at term 2"}
	{"level":"info","ts":"2025-11-01T11:02:36.933835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T11:02:36.933840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f received MsgVoteResp from 65e271b8f7cb8d0f at term 3"}
	{"level":"info","ts":"2025-11-01T11:02:36.933848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"65e271b8f7cb8d0f became leader at term 3"}
	{"level":"info","ts":"2025-11-01T11:02:36.933857Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 65e271b8f7cb8d0f elected leader 65e271b8f7cb8d0f at term 3"}
	{"level":"info","ts":"2025-11-01T11:02:36.936772Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"65e271b8f7cb8d0f","local-member-attributes":"{Name:test-preload-401855 ClientURLs:[https://192.168.39.101:2379]}","request-path":"/0/members/65e271b8f7cb8d0f/attributes","cluster-id":"24cb6133d13a326a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T11:02:36.936913Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T11:02:36.937323Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T11:02:36.937440Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T11:02:36.937462Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T11:02:36.938096Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-01T11:02:36.938106Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-01T11:02:36.938905Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T11:02:36.938944Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.101:2379"}
	
	
	==> kernel <==
	 11:02:55 up 0 min,  0 users,  load average: 1.92, 0.56, 0.19
	Linux test-preload-401855 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [5c69eb3a26b057c9da7de11ac847c82b6b222c72778d966a0f94f3cab6ba59fc] <==
	I1101 11:02:38.158530       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 11:02:38.158548       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 11:02:38.208658       1 shared_informer.go:320] Caches are synced for configmaps
	I1101 11:02:38.208729       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 11:02:38.208907       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 11:02:38.208747       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 11:02:38.209224       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1101 11:02:38.210275       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 11:02:38.212866       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 11:02:38.214903       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 11:02:38.219472       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1101 11:02:38.231110       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1101 11:02:38.231149       1 policy_source.go:240] refreshing policies
	I1101 11:02:38.257286       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 11:02:38.265660       1 cache.go:39] Caches are synced for autoregister controller
	I1101 11:02:38.675663       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1101 11:02:39.018151       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1101 11:02:39.421383       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.101]
	I1101 11:02:39.423000       1 controller.go:615] quota admission added evaluator for: endpoints
	I1101 11:02:39.930553       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1101 11:02:39.974418       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1101 11:02:40.002696       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 11:02:40.010402       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 11:02:41.523968       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1101 11:02:41.572494       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2b0761b552b578e351d1a26afa96282aea6c023accf74aaebc49e96485ed5b2e] <==
	I1101 11:02:41.406915       1 shared_informer.go:320] Caches are synced for cronjob
	I1101 11:02:41.411231       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1101 11:02:41.419685       1 shared_informer.go:320] Caches are synced for endpoint
	I1101 11:02:41.419723       1 shared_informer.go:320] Caches are synced for daemon sets
	I1101 11:02:41.421182       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1101 11:02:41.421265       1 shared_informer.go:320] Caches are synced for taint
	I1101 11:02:41.421391       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 11:02:41.421474       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-401855"
	I1101 11:02:41.421530       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 11:02:41.422125       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1101 11:02:41.423734       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1101 11:02:41.424975       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1101 11:02:41.426493       1 shared_informer.go:320] Caches are synced for service account
	I1101 11:02:41.430173       1 shared_informer.go:320] Caches are synced for namespace
	I1101 11:02:41.431354       1 shared_informer.go:320] Caches are synced for persistent volume
	I1101 11:02:41.434780       1 shared_informer.go:320] Caches are synced for resource quota
	I1101 11:02:41.447452       1 shared_informer.go:320] Caches are synced for garbage collector
	I1101 11:02:41.447485       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 11:02:41.447494       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 11:02:41.458028       1 shared_informer.go:320] Caches are synced for garbage collector
	I1101 11:02:41.531099       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="176.319739ms"
	I1101 11:02:41.532283       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="55.126µs"
	I1101 11:02:43.790863       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="75.54µs"
	I1101 11:02:47.470974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="13.441998ms"
	I1101 11:02:47.471387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="163.227µs"
	
	
	==> kube-proxy [60671645e8488dd237a816da3982aea6b247c0ebd1ff70ba59e0dccf7ee63376] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1101 11:02:39.504935       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1101 11:02:39.514676       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.101"]
	E1101 11:02:39.514798       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:02:39.554175       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1101 11:02:39.554271       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 11:02:39.554308       1 server_linux.go:170] "Using iptables Proxier"
	I1101 11:02:39.557238       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:02:39.557750       1 server.go:497] "Version info" version="v1.32.0"
	I1101 11:02:39.557981       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:02:39.559771       1 config.go:199] "Starting service config controller"
	I1101 11:02:39.559826       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1101 11:02:39.559867       1 config.go:105] "Starting endpoint slice config controller"
	I1101 11:02:39.559882       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1101 11:02:39.563327       1 config.go:329] "Starting node config controller"
	I1101 11:02:39.563372       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1101 11:02:39.660564       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1101 11:02:39.660644       1 shared_informer.go:320] Caches are synced for service config
	I1101 11:02:39.664270       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4d20a9c4c60b8661704f741dcc0dee91c4af429c6e40095098bf71f2e8286066] <==
	I1101 11:02:35.843314       1 serving.go:386] Generated self-signed cert in-memory
	W1101 11:02:38.071892       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 11:02:38.071946       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 11:02:38.071960       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 11:02:38.071970       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 11:02:38.135986       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1101 11:02:38.137628       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:02:38.141810       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:02:38.141934       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 11:02:38.143175       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1101 11:02:38.143289       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	W1101 11:02:38.154427       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	E1101 11:02:38.154510       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found, role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found]" logger="UnhandledError"
	I1101 11:02:39.443069       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: E1101 11:02:38.294387    1170 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-401855\" already exists" pod="kube-system/etcd-test-preload-401855"
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: I1101 11:02:38.327153    1170 kubelet_node_status.go:125] "Node was previously registered" node="test-preload-401855"
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: I1101 11:02:38.327267    1170 kubelet_node_status.go:79] "Successfully registered node" node="test-preload-401855"
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: I1101 11:02:38.327296    1170 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: I1101 11:02:38.328424    1170 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: I1101 11:02:38.329258    1170 setters.go:602] "Node became not ready" node="test-preload-401855" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T11:02:38Z","lastTransitionTime":"2025-11-01T11:02:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: I1101 11:02:38.605694    1170 apiserver.go:52] "Watching apiserver"
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: E1101 11:02:38.610890    1170 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-jxpvh" podUID="abfa6cb4-f481-49c3-8d4f-c2965695baa6"
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: I1101 11:02:38.631143    1170 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: I1101 11:02:38.669884    1170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/137a4f2f-9daa-4265-96db-4938d2459f31-xtables-lock\") pod \"kube-proxy-b7qr6\" (UID: \"137a4f2f-9daa-4265-96db-4938d2459f31\") " pod="kube-system/kube-proxy-b7qr6"
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: I1101 11:02:38.669934    1170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/137a4f2f-9daa-4265-96db-4938d2459f31-lib-modules\") pod \"kube-proxy-b7qr6\" (UID: \"137a4f2f-9daa-4265-96db-4938d2459f31\") " pod="kube-system/kube-proxy-b7qr6"
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: I1101 11:02:38.669953    1170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/77d8393c-13a2-4420-a857-b69650273b40-tmp\") pod \"storage-provisioner\" (UID: \"77d8393c-13a2-4420-a857-b69650273b40\") " pod="kube-system/storage-provisioner"
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: E1101 11:02:38.670503    1170 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 11:02:38 test-preload-401855 kubelet[1170]: E1101 11:02:38.670647    1170 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abfa6cb4-f481-49c3-8d4f-c2965695baa6-config-volume podName:abfa6cb4-f481-49c3-8d4f-c2965695baa6 nodeName:}" failed. No retries permitted until 2025-11-01 11:02:39.170573751 +0000 UTC m=+6.700740146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/abfa6cb4-f481-49c3-8d4f-c2965695baa6-config-volume") pod "coredns-668d6bf9bc-jxpvh" (UID: "abfa6cb4-f481-49c3-8d4f-c2965695baa6") : object "kube-system"/"coredns" not registered
	Nov 01 11:02:39 test-preload-401855 kubelet[1170]: E1101 11:02:39.172532    1170 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 11:02:39 test-preload-401855 kubelet[1170]: E1101 11:02:39.173774    1170 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abfa6cb4-f481-49c3-8d4f-c2965695baa6-config-volume podName:abfa6cb4-f481-49c3-8d4f-c2965695baa6 nodeName:}" failed. No retries permitted until 2025-11-01 11:02:40.173754656 +0000 UTC m=+7.703921039 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/abfa6cb4-f481-49c3-8d4f-c2965695baa6-config-volume") pod "coredns-668d6bf9bc-jxpvh" (UID: "abfa6cb4-f481-49c3-8d4f-c2965695baa6") : object "kube-system"/"coredns" not registered
	Nov 01 11:02:40 test-preload-401855 kubelet[1170]: E1101 11:02:40.181782    1170 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 11:02:40 test-preload-401855 kubelet[1170]: E1101 11:02:40.181868    1170 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/abfa6cb4-f481-49c3-8d4f-c2965695baa6-config-volume podName:abfa6cb4-f481-49c3-8d4f-c2965695baa6 nodeName:}" failed. No retries permitted until 2025-11-01 11:02:42.181852651 +0000 UTC m=+9.712019046 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/abfa6cb4-f481-49c3-8d4f-c2965695baa6-config-volume") pod "coredns-668d6bf9bc-jxpvh" (UID: "abfa6cb4-f481-49c3-8d4f-c2965695baa6") : object "kube-system"/"coredns" not registered
	Nov 01 11:02:40 test-preload-401855 kubelet[1170]: I1101 11:02:40.228685    1170 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 01 11:02:42 test-preload-401855 kubelet[1170]: E1101 11:02:42.691163    1170 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761994962690426790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 01 11:02:42 test-preload-401855 kubelet[1170]: E1101 11:02:42.691520    1170 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761994962690426790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 01 11:02:44 test-preload-401855 kubelet[1170]: I1101 11:02:44.778048    1170 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 11:02:47 test-preload-401855 kubelet[1170]: I1101 11:02:47.436992    1170 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 11:02:52 test-preload-401855 kubelet[1170]: E1101 11:02:52.693372    1170 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761994972693005383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 01 11:02:52 test-preload-401855 kubelet[1170]: E1101 11:02:52.693506    1170 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761994972693005383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [ad4f4d9f41bd08c5c5216bbd55f1845efb8aba6eb93ef89a829e601a0d873eb9] <==
	I1101 11:02:39.350648       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-401855 -n test-preload-401855
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-401855 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-401855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-401855
--- FAIL: TestPreload (158.07s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (67.76s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-112657 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-112657 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m2.572720635s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-112657] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-112657" primary control-plane node in "pause-112657" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-112657" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 11:10:07.891795  108549 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:10:07.892089  108549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:10:07.892101  108549 out.go:374] Setting ErrFile to fd 2...
	I1101 11:10:07.892105  108549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:10:07.892353  108549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 11:10:07.892859  108549 out.go:368] Setting JSON to false
	I1101 11:10:07.893817  108549 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10356,"bootTime":1761985052,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 11:10:07.893909  108549 start.go:143] virtualization: kvm guest
	I1101 11:10:07.895907  108549 out.go:179] * [pause-112657] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 11:10:07.897057  108549 notify.go:221] Checking for updates...
	I1101 11:10:07.897086  108549 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:10:07.898392  108549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:10:07.899761  108549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:10:07.901132  108549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 11:10:07.902476  108549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 11:10:07.903527  108549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:10:07.905287  108549 config.go:182] Loaded profile config "pause-112657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:07.905916  108549 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:10:07.956669  108549 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 11:10:07.958151  108549 start.go:309] selected driver: kvm2
	I1101 11:10:07.958175  108549 start.go:930] validating driver "kvm2" against &{Name:pause-112657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.34.1 ClusterName:pause-112657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.133 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-insta
ller:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:10:07.958375  108549 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:10:07.960006  108549 cni.go:84] Creating CNI manager for ""
	I1101 11:10:07.960086  108549 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:10:07.960170  108549 start.go:353] cluster config:
	{Name:pause-112657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-112657 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.133 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false
portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:10:07.960343  108549 iso.go:125] acquiring lock: {Name:mk49d9a272bb99d336f82dfc5631a4c8ce9271c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:10:07.962036  108549 out.go:179] * Starting "pause-112657" primary control-plane node in "pause-112657" cluster
	I1101 11:10:07.963185  108549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:10:07.963238  108549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 11:10:07.963251  108549 cache.go:59] Caching tarball of preloaded images
	I1101 11:10:07.963376  108549 preload.go:233] Found /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 11:10:07.963393  108549 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:10:07.963559  108549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/config.json ...
	I1101 11:10:07.963864  108549 start.go:360] acquireMachinesLock for pause-112657: {Name:mk53a05d125fe91ead2a39c6bbf2ba926c471e2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 11:10:19.820790  108549 start.go:364] duration metric: took 11.856867341s to acquireMachinesLock for "pause-112657"
	I1101 11:10:19.820847  108549 start.go:96] Skipping create...Using existing machine configuration
	I1101 11:10:19.820868  108549 fix.go:54] fixHost starting: 
	I1101 11:10:19.823404  108549 fix.go:112] recreateIfNeeded on pause-112657: state=Running err=<nil>
	W1101 11:10:19.823459  108549 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 11:10:19.825553  108549 out.go:252] * Updating the running kvm2 "pause-112657" VM ...
	I1101 11:10:19.825583  108549 machine.go:94] provisionDockerMachine start ...
	I1101 11:10:19.829597  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:19.830079  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:19.830112  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:19.830327  108549 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:19.830660  108549 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.133 22 <nil> <nil>}
	I1101 11:10:19.830680  108549 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:10:19.946836  108549 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-112657
	
	I1101 11:10:19.946877  108549 buildroot.go:166] provisioning hostname "pause-112657"
	I1101 11:10:19.950355  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:19.950873  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:19.950915  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:19.951163  108549 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:19.951558  108549 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.133 22 <nil> <nil>}
	I1101 11:10:19.951581  108549 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-112657 && echo "pause-112657" | sudo tee /etc/hostname
	I1101 11:10:20.088525  108549 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-112657
	
	I1101 11:10:20.091800  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:20.092311  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:20.092337  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:20.092603  108549 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:20.092804  108549 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.133 22 <nil> <nil>}
	I1101 11:10:20.092820  108549 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-112657' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-112657/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-112657' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:10:20.206616  108549 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:10:20.206656  108549 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 11:10:20.206716  108549 buildroot.go:174] setting up certificates
	I1101 11:10:20.206737  108549 provision.go:84] configureAuth start
	I1101 11:10:20.209946  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:20.210508  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:20.210574  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:20.213153  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:20.213565  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:20.213605  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:20.213761  108549 provision.go:143] copyHostCerts
	I1101 11:10:20.213826  108549 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem, removing ...
	I1101 11:10:20.213849  108549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem
	I1101 11:10:20.213919  108549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 11:10:20.214057  108549 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem, removing ...
	I1101 11:10:20.214068  108549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem
	I1101 11:10:20.214112  108549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 11:10:20.214212  108549 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem, removing ...
	I1101 11:10:20.214222  108549 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem
	I1101 11:10:20.214254  108549 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 11:10:20.214352  108549 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.pause-112657 san=[127.0.0.1 192.168.83.133 localhost minikube pause-112657]
	I1101 11:10:20.529295  108549 provision.go:177] copyRemoteCerts
	I1101 11:10:20.529356  108549 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:10:20.531845  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:20.532212  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:20.532236  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:20.532367  108549 sshutil.go:53] new ssh client: &{IP:192.168.83.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/pause-112657/id_rsa Username:docker}
	I1101 11:10:20.628547  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 11:10:20.665217  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 11:10:20.700619  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:10:20.734939  108549 provision.go:87] duration metric: took 528.183208ms to configureAuth
	I1101 11:10:20.734967  108549 buildroot.go:189] setting minikube options for container-runtime
	I1101 11:10:20.735200  108549 config.go:182] Loaded profile config "pause-112657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:20.738715  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:20.739180  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:20.739210  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:20.739388  108549 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:20.739656  108549 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.133 22 <nil> <nil>}
	I1101 11:10:20.739672  108549 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:10:26.303948  108549 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:10:26.303978  108549 machine.go:97] duration metric: took 6.478386273s to provisionDockerMachine
	I1101 11:10:26.303995  108549 start.go:293] postStartSetup for "pause-112657" (driver="kvm2")
	I1101 11:10:26.304011  108549 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:10:26.304088  108549 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:10:26.307035  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:26.307412  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:26.307440  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:26.307601  108549 sshutil.go:53] new ssh client: &{IP:192.168.83.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/pause-112657/id_rsa Username:docker}
	I1101 11:10:26.397241  108549 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:10:26.403393  108549 info.go:137] Remote host: Buildroot 2025.02
	I1101 11:10:26.403432  108549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 11:10:26.403511  108549 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 11:10:26.403644  108549 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem -> 739982.pem in /etc/ssl/certs
	I1101 11:10:26.403768  108549 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:10:26.420891  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:10:26.462498  108549 start.go:296] duration metric: took 158.485753ms for postStartSetup
	I1101 11:10:26.462570  108549 fix.go:56] duration metric: took 6.641715384s for fixHost
	I1101 11:10:26.465945  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:26.466458  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:26.466488  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:26.466708  108549 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:26.466913  108549 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.133 22 <nil> <nil>}
	I1101 11:10:26.466928  108549 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 11:10:26.583208  108549 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761995426.575596864
	
	I1101 11:10:26.583240  108549 fix.go:216] guest clock: 1761995426.575596864
	I1101 11:10:26.583255  108549 fix.go:229] Guest: 2025-11-01 11:10:26.575596864 +0000 UTC Remote: 2025-11-01 11:10:26.462577023 +0000 UTC m=+18.631307083 (delta=113.019841ms)
	I1101 11:10:26.583279  108549 fix.go:200] guest clock delta is within tolerance: 113.019841ms
	I1101 11:10:26.583287  108549 start.go:83] releasing machines lock for "pause-112657", held for 6.76245753s
	I1101 11:10:26.587070  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:26.587594  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:26.587621  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:26.588220  108549 ssh_runner.go:195] Run: cat /version.json
	I1101 11:10:26.588322  108549 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:10:26.591422  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:26.591576  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:26.591881  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:26.591914  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:26.591955  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:26.591987  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:26.592130  108549 sshutil.go:53] new ssh client: &{IP:192.168.83.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/pause-112657/id_rsa Username:docker}
	I1101 11:10:26.592300  108549 sshutil.go:53] new ssh client: &{IP:192.168.83.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/pause-112657/id_rsa Username:docker}
	I1101 11:10:26.677474  108549 ssh_runner.go:195] Run: systemctl --version
	I1101 11:10:26.702715  108549 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:10:26.873888  108549 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:10:26.883846  108549 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:10:26.883924  108549 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:10:26.899281  108549 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 11:10:26.899309  108549 start.go:496] detecting cgroup driver to use...
	I1101 11:10:26.899392  108549 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:10:26.931571  108549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:10:26.956040  108549 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:10:26.956126  108549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:10:26.981144  108549 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:10:27.003248  108549 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:10:27.271712  108549 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:10:27.484070  108549 docker.go:234] disabling docker service ...
	I1101 11:10:27.484149  108549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:10:27.513224  108549 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:10:27.530273  108549 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:10:27.723126  108549 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:10:27.899158  108549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:10:27.916337  108549 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:10:27.941847  108549 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:10:27.941939  108549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:27.955480  108549 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:10:27.955572  108549 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:27.970054  108549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:27.985787  108549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:28.005544  108549 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:10:28.024556  108549 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:28.039701  108549 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:28.054800  108549 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:28.069554  108549 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:10:28.086396  108549 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:10:28.102236  108549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:28.282367  108549 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:10:35.666526  108549 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.384115962s)
	I1101 11:10:35.666590  108549 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:10:35.666659  108549 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:10:35.674695  108549 start.go:564] Will wait 60s for crictl version
	I1101 11:10:35.674777  108549 ssh_runner.go:195] Run: which crictl
	I1101 11:10:35.680475  108549 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 11:10:35.732150  108549 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 11:10:35.732262  108549 ssh_runner.go:195] Run: crio --version
	I1101 11:10:35.770324  108549 ssh_runner.go:195] Run: crio --version
	I1101 11:10:35.813717  108549 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 11:10:35.817995  108549 main.go:143] libmachine: domain pause-112657 has defined MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:35.818406  108549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e8:d5:36", ip: ""} in network mk-pause-112657: {Iface:virbr5 ExpiryTime:2025-11-01 12:09:29 +0000 UTC Type:0 Mac:52:54:00:e8:d5:36 Iaid: IPaddr:192.168.83.133 Prefix:24 Hostname:pause-112657 Clientid:01:52:54:00:e8:d5:36}
	I1101 11:10:35.818436  108549 main.go:143] libmachine: domain pause-112657 has defined IP address 192.168.83.133 and MAC address 52:54:00:e8:d5:36 in network mk-pause-112657
	I1101 11:10:35.818650  108549 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1101 11:10:35.825200  108549 kubeadm.go:884] updating cluster {Name:pause-112657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1
ClusterName:pause-112657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.133 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:10:35.825383  108549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:10:35.825450  108549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:10:35.881975  108549 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:10:35.882004  108549 crio.go:433] Images already preloaded, skipping extraction
	I1101 11:10:35.882062  108549 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:10:35.934297  108549 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:10:35.934327  108549 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:10:35.934337  108549 kubeadm.go:935] updating node { 192.168.83.133 8443 v1.34.1 crio true true} ...
	I1101 11:10:35.934465  108549 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-112657 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-112657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:10:35.934571  108549 ssh_runner.go:195] Run: crio config
	I1101 11:10:35.990135  108549 cni.go:84] Creating CNI manager for ""
	I1101 11:10:35.990172  108549 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:10:35.990195  108549 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:10:35.990238  108549 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.133 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-112657 NodeName:pause-112657 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.133"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:10:35.990438  108549 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-112657"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.133"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.133"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:10:35.990525  108549 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:10:36.009953  108549 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:10:36.010041  108549 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:10:36.026488  108549 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1101 11:10:36.057937  108549 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:10:36.085553  108549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1101 11:10:36.116003  108549 ssh_runner.go:195] Run: grep 192.168.83.133	control-plane.minikube.internal$ /etc/hosts
	I1101 11:10:36.122569  108549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:36.315031  108549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:10:36.333418  108549 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657 for IP: 192.168.83.133
	I1101 11:10:36.333451  108549 certs.go:195] generating shared ca certs ...
	I1101 11:10:36.333493  108549 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:36.333689  108549 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 11:10:36.333753  108549 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 11:10:36.333766  108549 certs.go:257] generating profile certs ...
	I1101 11:10:36.333875  108549 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/client.key
	I1101 11:10:36.333959  108549 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/apiserver.key.42547181
	I1101 11:10:36.334017  108549 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/proxy-client.key
	I1101 11:10:36.334201  108549 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem (1338 bytes)
	W1101 11:10:36.334250  108549 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998_empty.pem, impossibly tiny 0 bytes
	I1101 11:10:36.334263  108549 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 11:10:36.334296  108549 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:10:36.334330  108549 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:10:36.334364  108549 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 11:10:36.334414  108549 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:10:36.335504  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:10:36.380106  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:10:36.421667  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:10:36.463184  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:10:36.501105  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 11:10:36.538314  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:10:36.576015  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:10:36.720975  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:10:36.813819  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /usr/share/ca-certificates/739982.pem (1708 bytes)
	I1101 11:10:36.938717  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:10:37.047698  108549 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem --> /usr/share/ca-certificates/73998.pem (1338 bytes)
	I1101 11:10:37.113674  108549 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:10:37.166597  108549 ssh_runner.go:195] Run: openssl version
	I1101 11:10:37.179595  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739982.pem && ln -fs /usr/share/ca-certificates/739982.pem /etc/ssl/certs/739982.pem"
	I1101 11:10:37.216908  108549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739982.pem
	I1101 11:10:37.234749  108549 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:03 /usr/share/ca-certificates/739982.pem
	I1101 11:10:37.234826  108549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739982.pem
	I1101 11:10:37.255289  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/739982.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:10:37.281863  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:10:37.313495  108549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:37.325379  108549 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:37.325465  108549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:37.345087  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:10:37.373836  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73998.pem && ln -fs /usr/share/ca-certificates/73998.pem /etc/ssl/certs/73998.pem"
	I1101 11:10:37.408711  108549 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73998.pem
	I1101 11:10:37.419584  108549 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:03 /usr/share/ca-certificates/73998.pem
	I1101 11:10:37.419659  108549 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73998.pem
	I1101 11:10:37.443237  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73998.pem /etc/ssl/certs/51391683.0"
	I1101 11:10:37.492809  108549 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:10:37.520434  108549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:10:37.533048  108549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:10:37.549236  108549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:10:37.563869  108549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:10:37.576122  108549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:10:37.595318  108549 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 11:10:37.621722  108549 kubeadm.go:401] StartCluster: {Name:pause-112657 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 Cl
usterName:pause-112657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.133 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-
gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:10:37.621890  108549 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:10:37.621995  108549 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:10:37.763122  108549 cri.go:89] found id: "4f557d4e8f14008c0f3af610b5e7d21f6bc34a9ef9b305c98652539ec8b3a059"
	I1101 11:10:37.763150  108549 cri.go:89] found id: "a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67"
	I1101 11:10:37.763156  108549 cri.go:89] found id: "362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b"
	I1101 11:10:37.763161  108549 cri.go:89] found id: "b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79"
	I1101 11:10:37.763166  108549 cri.go:89] found id: "e6255bd1d028d6e695d0e2603839a8b912279be15f15055ab2fdcac158a767f2"
	I1101 11:10:37.763170  108549 cri.go:89] found id: "17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950"
	I1101 11:10:37.763174  108549 cri.go:89] found id: "c2f04ed6767836b908b324e86b268647055e7e90747f6a36ae0bf4e086b7e5a5"
	I1101 11:10:37.763178  108549 cri.go:89] found id: "ebc5a0c73a4110e676ab0c3f4c380b85618807e05f7a96b71f987beeff81cb68"
	I1101 11:10:37.763182  108549 cri.go:89] found id: ""
	I1101 11:10:37.763235  108549 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
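For context on why pause_test.go:100 fails here: the test runs the second start and requires the "no reconfiguration" marker in its combined output, but the stderr above shows the start path restarting CRI-O and re-provisioning the machine instead. The following is a minimal sketch of that kind of assertion, not the actual minikube test code (the real test uses minikube's own test helpers rather than exec directly):

	// Hypothetical sketch of the assertion style at pause_test.go:100.
	package pause_test

	import (
		"os/exec"
		"strings"
		"testing"
	)

	func TestSecondStartNoReconfiguration(t *testing.T) {
		// Re-run the start command against the already-running profile.
		args := []string{"start", "-p", "pause-112657", "--alsologtostderr", "-v=1",
			"--driver=kvm2", "--container-runtime=crio"}
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		if err != nil {
			t.Fatalf("second start failed: %v\n%s", err, out)
		}
		// The second start should detect the running cluster and skip reconfiguration.
		const want = "The running cluster does not require reconfiguration"
		if !strings.Contains(string(out), want) {
			t.Errorf("expected the second start log output to include %q but got:\n%s", want, out)
		}
	}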
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-112657 -n pause-112657
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-112657 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-112657 logs -n 25: (1.635509625s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p kubernetes-upgrade-272276                                                                                                                                                                                            │ kubernetes-upgrade-272276 │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:08 UTC │
	│ delete  │ -p running-upgrade-768085                                                                                                                                                                                               │ running-upgrade-768085    │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:08 UTC │
	│ start   │ -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-272276 │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:08 UTC │
	│ ssh     │ -p NoKubernetes-028702 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-028702       │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │                     │
	│ delete  │ -p NoKubernetes-028702                                                                                                                                                                                                  │ NoKubernetes-028702       │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:08 UTC │
	│ start   │ -p stopped-upgrade-391167 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-391167    │ jenkins │ v1.32.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:09 UTC │
	│ start   │ -p guest-290834 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                 │ guest-290834              │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:09 UTC │
	│ start   │ -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-272276 │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-272276 │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:09 UTC │
	│ start   │ -p pause-112657 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-112657              │ jenkins │ v1.37.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:10 UTC │
	│ stop    │ stopped-upgrade-391167 stop                                                                                                                                                                                             │ stopped-upgrade-391167    │ jenkins │ v1.32.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:09 UTC │
	│ start   │ -p stopped-upgrade-391167 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-391167    │ jenkins │ v1.37.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:10 UTC │
	│ start   │ -p cert-expiration-917729 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                 │ cert-expiration-917729    │ jenkins │ v1.37.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:10 UTC │
	│ delete  │ -p kubernetes-upgrade-272276                                                                                                                                                                                            │ kubernetes-upgrade-272276 │ jenkins │ v1.37.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:09 UTC │
	│ start   │ -p cert-options-970426 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-970426       │ jenkins │ v1.37.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:10 UTC │
	│ start   │ -p pause-112657 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-112657              │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:11 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-391167 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-391167    │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │                     │
	│ delete  │ -p stopped-upgrade-391167                                                                                                                                                                                               │ stopped-upgrade-391167    │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:10 UTC │
	│ start   │ -p auto-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-216814               │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │                     │
	│ delete  │ -p cert-expiration-917729                                                                                                                                                                                               │ cert-expiration-917729    │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:10 UTC │
	│ start   │ -p kindnet-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-216814            │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │                     │
	│ ssh     │ cert-options-970426 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-970426       │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:10 UTC │
	│ ssh     │ -p cert-options-970426 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-970426       │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:10 UTC │
	│ delete  │ -p cert-options-970426                                                                                                                                                                                                  │ cert-options-970426       │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:10 UTC │
	│ start   │ -p calico-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio                                                                                    │ calico-216814             │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:10:44
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:10:44.045405  109110 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:10:44.045672  109110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:10:44.045683  109110 out.go:374] Setting ErrFile to fd 2...
	I1101 11:10:44.045687  109110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:10:44.045903  109110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 11:10:44.046412  109110 out.go:368] Setting JSON to false
	I1101 11:10:44.047269  109110 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10392,"bootTime":1761985052,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 11:10:44.047321  109110 start.go:143] virtualization: kvm guest
	I1101 11:10:44.049367  109110 out.go:179] * [calico-216814] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 11:10:44.051361  109110 notify.go:221] Checking for updates...
	I1101 11:10:44.051397  109110 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:10:44.053336  109110 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:10:44.054757  109110 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:10:44.056093  109110 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 11:10:44.057430  109110 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 11:10:44.058657  109110 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:10:44.060525  109110 config.go:182] Loaded profile config "auto-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:44.060671  109110 config.go:182] Loaded profile config "guest-290834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 11:10:44.060801  109110 config.go:182] Loaded profile config "kindnet-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:44.060977  109110 config.go:182] Loaded profile config "pause-112657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:44.061116  109110 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:10:44.098033  109110 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 11:10:44.099290  109110 start.go:309] selected driver: kvm2
	I1101 11:10:44.099325  109110 start.go:930] validating driver "kvm2" against <nil>
	I1101 11:10:44.099343  109110 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:10:44.100137  109110 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 11:10:44.100387  109110 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:10:44.100426  109110 cni.go:84] Creating CNI manager for "calico"
	I1101 11:10:44.100437  109110 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1101 11:10:44.100488  109110 start.go:353] cluster config:
	{Name:calico-216814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-216814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:10:44.100639  109110 iso.go:125] acquiring lock: {Name:mk49d9a272bb99d336f82dfc5631a4c8ce9271c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:10:44.102238  109110 out.go:179] * Starting "calico-216814" primary control-plane node in "calico-216814" cluster
	I1101 11:10:43.189626  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:43.190362  108661 main.go:143] libmachine: no network interface addresses found for domain auto-216814 (source=lease)
	I1101 11:10:43.190382  108661 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:43.191757  108661 main.go:143] libmachine: unable to find current IP address of domain auto-216814 in network mk-auto-216814 (interfaces detected: [])
	I1101 11:10:43.191804  108661 retry.go:31] will retry after 4.031370035s: waiting for domain to come up
	I1101 11:10:47.228056  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.228746  108661 main.go:143] libmachine: domain auto-216814 has current primary IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.228765  108661 main.go:143] libmachine: found domain IP: 192.168.39.236
	I1101 11:10:47.228772  108661 main.go:143] libmachine: reserving static IP address...
	I1101 11:10:47.229498  108661 main.go:143] libmachine: unable to find host DHCP lease matching {name: "auto-216814", mac: "52:54:00:37:0c:61", ip: "192.168.39.236"} in network mk-auto-216814
	I1101 11:10:48.878979  108776 start.go:364] duration metric: took 31.424039299s to acquireMachinesLock for "kindnet-216814"
	I1101 11:10:48.879053  108776 start.go:93] Provisioning new machine with config: &{Name:kindnet-216814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-216814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:10:48.879213  108776 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 11:10:44.103359  109110 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:10:44.103397  109110 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 11:10:44.103405  109110 cache.go:59] Caching tarball of preloaded images
	I1101 11:10:44.103474  109110 preload.go:233] Found /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 11:10:44.103485  109110 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:10:44.103603  109110 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/config.json ...
	I1101 11:10:44.103623  109110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/config.json: {Name:mkfde681c122cd962ee1bcd79b983564ae0573cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:44.103765  109110 start.go:360] acquireMachinesLock for calico-216814: {Name:mk53a05d125fe91ead2a39c6bbf2ba926c471e2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 11:10:47.436681  108661 main.go:143] libmachine: reserved static IP address 192.168.39.236 for domain auto-216814
	I1101 11:10:47.436716  108661 main.go:143] libmachine: waiting for SSH...
	I1101 11:10:47.436725  108661 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 11:10:47.440423  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.440898  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:minikube Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.440924  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.441130  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:47.441414  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:47.441428  108661 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 11:10:47.550862  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:10:47.551260  108661 main.go:143] libmachine: domain creation complete
	I1101 11:10:47.552783  108661 machine.go:94] provisionDockerMachine start ...
	I1101 11:10:47.555369  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.555748  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.555772  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.555951  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:47.556140  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:47.556150  108661 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:10:47.663990  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 11:10:47.664023  108661 buildroot.go:166] provisioning hostname "auto-216814"
	I1101 11:10:47.667067  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.667466  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.667491  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.667682  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:47.667923  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:47.667938  108661 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-216814 && echo "auto-216814" | sudo tee /etc/hostname
	I1101 11:10:47.792003  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-216814
	
	I1101 11:10:47.795497  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.795942  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.795976  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.796168  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:47.796438  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:47.796464  108661 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-216814' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-216814/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-216814' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:10:47.917458  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:10:47.917488  108661 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 11:10:47.917518  108661 buildroot.go:174] setting up certificates
	I1101 11:10:47.917553  108661 provision.go:84] configureAuth start
	I1101 11:10:47.920690  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.921157  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.921187  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.923911  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.924334  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.924359  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.924524  108661 provision.go:143] copyHostCerts
	I1101 11:10:47.924601  108661 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem, removing ...
	I1101 11:10:47.924626  108661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem
	I1101 11:10:47.924713  108661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 11:10:47.924862  108661 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem, removing ...
	I1101 11:10:47.924877  108661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem
	I1101 11:10:47.924926  108661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 11:10:47.925016  108661 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem, removing ...
	I1101 11:10:47.925024  108661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem
	I1101 11:10:47.925057  108661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 11:10:47.925134  108661 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.auto-216814 san=[127.0.0.1 192.168.39.236 auto-216814 localhost minikube]
	I1101 11:10:48.136843  108661 provision.go:177] copyRemoteCerts
	I1101 11:10:48.136920  108661 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:10:48.139963  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.140367  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.140397  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.140580  108661 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/auto-216814/id_rsa Username:docker}
	I1101 11:10:48.227500  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 11:10:48.260736  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:10:48.295336  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 11:10:48.326894  108661 provision.go:87] duration metric: took 409.320663ms to configureAuth
	I1101 11:10:48.326932  108661 buildroot.go:189] setting minikube options for container-runtime
	I1101 11:10:48.327152  108661 config.go:182] Loaded profile config "auto-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:48.330414  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.330812  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.330848  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.331047  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:48.331253  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:48.331268  108661 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:10:48.608801  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:10:48.608846  108661 machine.go:97] duration metric: took 1.056044143s to provisionDockerMachine
	I1101 11:10:48.608857  108661 client.go:176] duration metric: took 22.006931282s to LocalClient.Create
	I1101 11:10:48.608874  108661 start.go:167] duration metric: took 22.007003584s to libmachine.API.Create "auto-216814"
	I1101 11:10:48.608886  108661 start.go:293] postStartSetup for "auto-216814" (driver="kvm2")
	I1101 11:10:48.608898  108661 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:10:48.608982  108661 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:10:48.612238  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.612737  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.612773  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.612941  108661 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/auto-216814/id_rsa Username:docker}
	I1101 11:10:48.702368  108661 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:10:48.708323  108661 info.go:137] Remote host: Buildroot 2025.02
	I1101 11:10:48.708353  108661 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 11:10:48.708417  108661 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 11:10:48.708488  108661 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem -> 739982.pem in /etc/ssl/certs
	I1101 11:10:48.708599  108661 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:10:48.721869  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:10:48.760855  108661 start.go:296] duration metric: took 151.950822ms for postStartSetup
	I1101 11:10:48.764227  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.764790  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.764826  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.765109  108661 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/config.json ...
	I1101 11:10:48.765303  108661 start.go:128] duration metric: took 22.181722285s to createHost
	I1101 11:10:48.768246  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.768675  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.768705  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.768939  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:48.769172  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:48.769184  108661 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 11:10:48.878712  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761995448.840573856
	
	I1101 11:10:48.878735  108661 fix.go:216] guest clock: 1761995448.840573856
	I1101 11:10:48.878745  108661 fix.go:229] Guest: 2025-11-01 11:10:48.840573856 +0000 UTC Remote: 2025-11-01 11:10:48.765314896 +0000 UTC m=+36.569731817 (delta=75.25896ms)
	I1101 11:10:48.878765  108661 fix.go:200] guest clock delta is within tolerance: 75.25896ms
	I1101 11:10:48.878771  108661 start.go:83] releasing machines lock for "auto-216814", held for 22.295365601s
	I1101 11:10:48.882551  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.883219  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.883257  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.883919  108661 ssh_runner.go:195] Run: cat /version.json
	I1101 11:10:48.884043  108661 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:10:48.887232  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.887417  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.887673  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.887705  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.887854  108661 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/auto-216814/id_rsa Username:docker}
	I1101 11:10:48.887880  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.887921  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.888080  108661 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/auto-216814/id_rsa Username:docker}
	I1101 11:10:48.976751  108661 ssh_runner.go:195] Run: systemctl --version
	I1101 11:10:49.005631  108661 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:10:49.182607  108661 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:10:49.192823  108661 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:10:49.192917  108661 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:10:49.219682  108661 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 11:10:49.219714  108661 start.go:496] detecting cgroup driver to use...
	I1101 11:10:49.219801  108661 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:10:49.239349  108661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:10:49.257315  108661 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:10:49.257372  108661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:10:49.276366  108661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:10:49.297994  108661 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:10:49.455446  108661 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:10:49.673145  108661 docker.go:234] disabling docker service ...
	I1101 11:10:49.673242  108661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:10:49.694708  108661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:10:49.712035  108661 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:10:49.872134  108661 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:10:50.033720  108661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:10:50.056124  108661 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:10:50.081761  108661 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:10:50.081841  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.099457  108661 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:10:50.099551  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.113560  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.127515  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.142996  108661 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:10:50.157665  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.176378  108661 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.202945  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.216889  108661 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:10:50.228906  108661 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 11:10:50.228973  108661 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 11:10:50.257141  108661 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:10:50.271423  108661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:50.446795  108661 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:10:50.581601  108661 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:10:50.581691  108661 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:10:50.588255  108661 start.go:564] Will wait 60s for crictl version
	I1101 11:10:50.588323  108661 ssh_runner.go:195] Run: which crictl
	I1101 11:10:50.592919  108661 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 11:10:50.640978  108661 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 11:10:50.641065  108661 ssh_runner.go:195] Run: crio --version
	I1101 11:10:50.682377  108661 ssh_runner.go:195] Run: crio --version
	I1101 11:10:50.722263  108661 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 11:10:50.727023  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:50.727689  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:50.727734  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:50.728058  108661 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 11:10:50.733289  108661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:10:50.749696  108661 kubeadm.go:884] updating cluster {Name:auto-216814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-216814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.236 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:10:50.749855  108661 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:10:50.749927  108661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:10:50.793414  108661 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 11:10:50.793486  108661 ssh_runner.go:195] Run: which lz4
	I1101 11:10:50.798740  108661 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 11:10:50.804437  108661 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 11:10:50.804478  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 11:10:48.881051  108776 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1101 11:10:48.881331  108776 start.go:159] libmachine.API.Create for "kindnet-216814" (driver="kvm2")
	I1101 11:10:48.881382  108776 client.go:173] LocalClient.Create starting
	I1101 11:10:48.881480  108776 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem
	I1101 11:10:48.881555  108776 main.go:143] libmachine: Decoding PEM data...
	I1101 11:10:48.881586  108776 main.go:143] libmachine: Parsing certificate...
	I1101 11:10:48.881710  108776 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem
	I1101 11:10:48.881748  108776 main.go:143] libmachine: Decoding PEM data...
	I1101 11:10:48.881760  108776 main.go:143] libmachine: Parsing certificate...
	I1101 11:10:48.882336  108776 main.go:143] libmachine: creating domain...
	I1101 11:10:48.882352  108776 main.go:143] libmachine: creating network...
	I1101 11:10:48.884321  108776 main.go:143] libmachine: found existing default network
	I1101 11:10:48.884717  108776 main.go:143] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 11:10:48.885818  108776 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:df:ad:39} reservation:<nil>}
	I1101 11:10:48.886435  108776 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6f:07:43} reservation:<nil>}
	I1101 11:10:48.887503  108776 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ebca40}
	I1101 11:10:48.887631  108776 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-kindnet-216814</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 11:10:48.896009  108776 main.go:143] libmachine: creating private network mk-kindnet-216814 192.168.61.0/24...
	I1101 11:10:48.983442  108776 main.go:143] libmachine: private network mk-kindnet-216814 192.168.61.0/24 created
	I1101 11:10:48.983822  108776 main.go:143] libmachine: <network>
	  <name>mk-kindnet-216814</name>
	  <uuid>e1a0d679-49d4-4ef6-a2f5-d7355a12eff1</uuid>
	  <bridge name='virbr3' stp='on' delay='0'/>
	  <mac address='52:54:00:4f:f0:de'/>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 11:10:48.983866  108776 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814 ...
	I1101 11:10:48.983905  108776 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 11:10:48.983922  108776 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 11:10:48.984006  108776 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21830-70113/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 11:10:49.252250  108776 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/id_rsa...
	I1101 11:10:49.629882  108776 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/kindnet-216814.rawdisk...
	I1101 11:10:49.629926  108776 main.go:143] libmachine: Writing magic tar header
	I1101 11:10:49.629944  108776 main.go:143] libmachine: Writing SSH key tar header
	I1101 11:10:49.630018  108776 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814 ...
	I1101 11:10:49.630085  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814
	I1101 11:10:49.630140  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814 (perms=drwx------)
	I1101 11:10:49.630160  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines
	I1101 11:10:49.630171  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines (perms=drwxr-xr-x)
	I1101 11:10:49.630183  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 11:10:49.630192  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube (perms=drwxr-xr-x)
	I1101 11:10:49.630203  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113
	I1101 11:10:49.630211  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113 (perms=drwxrwxr-x)
	I1101 11:10:49.630221  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 11:10:49.630229  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 11:10:49.630236  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 11:10:49.630243  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 11:10:49.630252  108776 main.go:143] libmachine: checking permissions on dir: /home
	I1101 11:10:49.630272  108776 main.go:143] libmachine: skipping /home - not owner
	I1101 11:10:49.630282  108776 main.go:143] libmachine: defining domain...
	I1101 11:10:49.631766  108776 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>kindnet-216814</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/kindnet-216814.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-kindnet-216814'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
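
The <domain> definition above is generated from the machine config: name, 3072 MiB of memory, 2 vCPUs, the boot2docker ISO as a cdrom, the rawdisk as the system disk, and two virtio network interfaces. A minimal sketch of rendering a similar skeleton with text/template; the struct fields and the template itself are illustrative, not the kvm2 driver's actual template.

package main

import (
    "os"
    "text/template"
)

// domainConfig holds the handful of values that vary per machine in the
// XML shown above; field names are illustrative, not minikube's.
type domainConfig struct {
    Name     string
    MemoryMB int
    VCPUs    int
    ISO      string
    RawDisk  string
    Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' io='threads'/>
      <source file='{{.RawDisk}}'/><target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

func main() {
    cfg := domainConfig{
        Name:     "kindnet-216814",
        MemoryMB: 3072,
        VCPUs:    2,
        ISO:      "/path/to/boot2docker.iso", // placeholder paths
        RawDisk:  "/path/to/kindnet-216814.rawdisk",
        Network:  "mk-kindnet-216814",
    }
    tmpl := template.Must(template.New("domain").Parse(domainTmpl))
    if err := tmpl.Execute(os.Stdout, cfg); err != nil {
        panic(err)
    }
}
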
	
	I1101 11:10:49.637155  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:b3:49:f1 in network default
	I1101 11:10:49.637934  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:49.637951  108776 main.go:143] libmachine: starting domain...
	I1101 11:10:49.637955  108776 main.go:143] libmachine: ensuring networks are active...
	I1101 11:10:49.639166  108776 main.go:143] libmachine: Ensuring network default is active
	I1101 11:10:49.639716  108776 main.go:143] libmachine: Ensuring network mk-kindnet-216814 is active
	I1101 11:10:49.640503  108776 main.go:143] libmachine: getting domain XML...
	I1101 11:10:49.641840  108776 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kindnet-216814</name>
	  <uuid>bf53b502-1acf-4053-9907-76d4f22d4fb0</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/kindnet-216814.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:56:75:ca'/>
	      <source network='mk-kindnet-216814'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b3:49:f1'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 11:10:51.079963  108776 main.go:143] libmachine: waiting for domain to start...
	I1101 11:10:51.082179  108776 main.go:143] libmachine: domain is now running
	I1101 11:10:51.082204  108776 main.go:143] libmachine: waiting for IP...
	I1101 11:10:51.083252  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:51.084062  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:51.084082  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:51.084716  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:51.084786  108776 retry.go:31] will retry after 307.449312ms: waiting for domain to come up
	I1101 11:10:51.394756  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:51.395960  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:51.396000  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:51.396586  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:51.396635  108776 retry.go:31] will retry after 264.585062ms: waiting for domain to come up
	I1101 11:10:51.663136  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:51.663929  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:51.663951  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:51.664429  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:51.664491  108776 retry.go:31] will retry after 487.454053ms: waiting for domain to come up
	I1101 11:10:52.153810  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:52.154717  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:52.154740  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:52.155299  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:52.155345  108776 retry.go:31] will retry after 519.149478ms: waiting for domain to come up
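
The retry.go lines above show the driver polling for the new domain's IP: look up the MAC in the DHCP leases, fall back to ARP, and retry with a small randomized delay until an address appears. A standard-library sketch of that polling shape; lookupIP is a hypothetical stand-in for the lease/ARP query, not a real minikube function.

package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// lookupIP is a stand-in for querying libvirt DHCP leases (or the ARP
// table) for the domain's MAC address; it is hypothetical here and
// always reports "no lease yet".
func lookupIP(mac string) (string, error) {
    return "", errors.New("no lease yet")
}

// waitForIP polls until an IP shows up or the deadline passes, sleeping a
// randomized interval between attempts, similar in spirit to the
// "will retry after ...: waiting for domain to come up" lines above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if ip, err := lookupIP(mac); err == nil {
            return ip, nil
        }
        backoff := 300*time.Millisecond + time.Duration(rand.Intn(500))*time.Millisecond
        fmt.Printf("will retry after %v: waiting for domain to come up\n", backoff)
        time.Sleep(backoff)
    }
    return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
    if _, err := waitForIP("52:54:00:56:75:ca", 3*time.Second); err != nil {
        fmt.Println(err)
    }
}
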
	I1101 11:10:48.809247  108549 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5 4f557d4e8f14008c0f3af610b5e7d21f6bc34a9ef9b305c98652539ec8b3a059 a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67 362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79 e6255bd1d028d6e695d0e2603839a8b912279be15f15055ab2fdcac158a767f2 17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950 c2f04ed6767836b908b324e86b268647055e7e90747f6a36ae0bf4e086b7e5a5 ebc5a0c73a4110e676ab0c3f4c380b85618807e05f7a96b71f987beeff81cb68: (10.575117923s)
	I1101 11:10:48.809329  108549 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 11:10:48.854114  108549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:10:48.867871  108549 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov  1 11:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Nov  1 11:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Nov  1 11:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Nov  1 11:09 /etc/kubernetes/scheduler.conf
	
	I1101 11:10:48.867946  108549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:10:48.881833  108549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:10:48.899403  108549 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:10:48.899482  108549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:10:48.918325  108549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:10:48.931987  108549 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:10:48.932051  108549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:10:48.945019  108549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:10:48.957958  108549 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:10:48.958017  108549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
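
The grep/rm cycle above keeps only the kubeconfig files that already reference https://control-plane.minikube.internal:8443 and deletes the rest before they are regenerated. A local sketch of the same check, assuming plain file access (the real commands run on the node over SSH via sudo); the helper name is made up.

package main

import (
    "fmt"
    "os"
    "strings"
)

// cleanStaleConfigs removes any kubeconfig-style file that does not
// reference the expected control-plane endpoint, the local analogue of
// the grep / rm -f sequence in the log above.
func cleanStaleConfigs(endpoint string, paths []string) {
    for _, p := range paths {
        data, err := os.ReadFile(p)
        if err != nil || !strings.Contains(string(data), endpoint) {
            fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
            _ = os.Remove(p) // ignore "not exist", just like rm -f
        }
    }
}

func main() {
    cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
        "/etc/kubernetes/admin.conf",
        "/etc/kubernetes/kubelet.conf",
        "/etc/kubernetes/controller-manager.conf",
        "/etc/kubernetes/scheduler.conf",
    })
}
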
	I1101 11:10:48.972706  108549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:10:48.990484  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:49.052042  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:49.874427  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:50.180062  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:50.269345  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:50.403042  108549 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:10:50.403157  108549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:10:50.903250  108549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:10:51.403469  108549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:10:51.468793  108549 api_server.go:72] duration metric: took 1.065769017s to wait for apiserver process to appear ...
	I1101 11:10:51.468829  108549 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:10:51.468867  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:51.469626  108549 api_server.go:269] stopped: https://192.168.83.133:8443/healthz: Get "https://192.168.83.133:8443/healthz": dial tcp 192.168.83.133:8443: connect: connection refused
	I1101 11:10:51.969291  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
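
The api_server.go lines here (and the larger dumps further down) poll https://192.168.83.133:8443/healthz until it answers 200: early attempts get connection refused, then 403 for the anonymous user, then 500 while post-start hooks such as rbac/bootstrap-roles finish. A minimal sketch of such a poller; TLS verification is skipped only to keep the sketch self-contained, whereas the real client uses proper credentials.

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it answers 200
// or the timeout expires, printing each non-OK body like the log above.
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout:   2 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
            fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
        } else {
            fmt.Printf("stopped: %v\n", err)
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
    if err := waitForHealthz("https://192.168.83.133:8443/healthz", 5*time.Second); err != nil {
        fmt.Println(err)
    }
}
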
	I1101 11:10:52.706461  108661 crio.go:462] duration metric: took 1.907772004s to copy over tarball
	I1101 11:10:52.706560  108661 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 11:10:54.697099  108661 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.990496258s)
	I1101 11:10:54.697150  108661 crio.go:469] duration metric: took 1.990656337s to extract the tarball
	I1101 11:10:54.697162  108661 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 11:10:54.755508  108661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:10:54.809797  108661 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:10:54.809827  108661 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:10:54.809837  108661 kubeadm.go:935] updating node { 192.168.39.236 8443 v1.34.1 crio true true} ...
	I1101 11:10:54.809957  108661 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-216814 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-216814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:10:54.810050  108661 ssh_runner.go:195] Run: crio config
	I1101 11:10:54.867837  108661 cni.go:84] Creating CNI manager for ""
	I1101 11:10:54.867863  108661 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:10:54.867883  108661 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:10:54.867906  108661 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.236 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-216814 NodeName:auto-216814 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:10:54.868032  108661 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-216814"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.236"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.236"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
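
The kubeadm config above is rendered as a multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that walks such a file and prints each document's kind plus the kubernetesVersion where present, assuming gopkg.in/yaml.v3 is available; this is a reviewer's convenience, not part of minikube.

package main

import (
    "errors"
    "fmt"
    "io"
    "os"

    "gopkg.in/yaml.v3"
)

// checkKubeadmConfig walks every YAML document in a kubeadm config file
// and reports its kind and apiVersion; field names match the YAML above.
func checkKubeadmConfig(path string) error {
    f, err := os.Open(path)
    if err != nil {
        return err
    }
    defer f.Close()

    dec := yaml.NewDecoder(f)
    for {
        var doc map[string]interface{}
        if err := dec.Decode(&doc); err != nil {
            if errors.Is(err, io.EOF) {
                return nil
            }
            return err
        }
        fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
        if v, ok := doc["kubernetesVersion"]; ok {
            fmt.Printf("  kubernetesVersion=%v\n", v)
        }
    }
}

func main() {
    // Illustrative local path; on the node it lands in /var/tmp/minikube/kubeadm.yaml.
    if err := checkKubeadmConfig("kubeadm.yaml"); err != nil {
        fmt.Println(err)
    }
}
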
	
	I1101 11:10:54.868100  108661 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:10:54.881733  108661 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:10:54.881820  108661 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:10:54.899081  108661 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1101 11:10:54.928184  108661 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:10:54.953547  108661 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 11:10:54.978376  108661 ssh_runner.go:195] Run: grep 192.168.39.236	control-plane.minikube.internal$ /etc/hosts
	I1101 11:10:54.982891  108661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
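
The one-liner above makes /etc/hosts idempotently map control-plane.minikube.internal to the node IP: strip any existing line ending in that name, append the new mapping, and copy the result back. A local Go analogue of the same rewrite (the real pipeline runs on the node via sudo); the helper name is made up.

package main

import (
    "fmt"
    "os"
    "strings"
)

// ensureHostsEntry rewrites hosts content so exactly one line maps the
// given name to ip, mirroring the grep -v / echo / cp pipeline above.
func ensureHostsEntry(content, ip, name string) string {
    var kept []string
    for _, line := range strings.Split(content, "\n") {
        if strings.HasSuffix(line, "\t"+name) {
            continue // drop any stale mapping for this name
        }
        if line != "" {
            kept = append(kept, line)
        }
    }
    kept = append(kept, ip+"\t"+name)
    return strings.Join(kept, "\n") + "\n"
}

func main() {
    data, err := os.ReadFile("/etc/hosts")
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Print(ensureHostsEntry(string(data), "192.168.39.236", "control-plane.minikube.internal"))
}
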
	I1101 11:10:55.003844  108661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:55.169051  108661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:10:55.211965  108661 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814 for IP: 192.168.39.236
	I1101 11:10:55.211996  108661 certs.go:195] generating shared ca certs ...
	I1101 11:10:55.212017  108661 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.212246  108661 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 11:10:55.212316  108661 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 11:10:55.212331  108661 certs.go:257] generating profile certs ...
	I1101 11:10:55.212409  108661 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.key
	I1101 11:10:55.212427  108661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt with IP's: []
	I1101 11:10:55.442098  108661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt ...
	I1101 11:10:55.442149  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: {Name:mk1d4b75890cec9adcc5b06d3f96aff1213acbea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.442355  108661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.key ...
	I1101 11:10:55.442372  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.key: {Name:mk1b07a8c6fe28f5b7485a2ae6b2d9f6e6454f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.442497  108661 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key.8a5bb5eb
	I1101 11:10:55.442516  108661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt.8a5bb5eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.236]
	I1101 11:10:55.555429  108661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt.8a5bb5eb ...
	I1101 11:10:55.555459  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt.8a5bb5eb: {Name:mk159fe587f63b7fc52d3cf379601116578d91a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.555636  108661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key.8a5bb5eb ...
	I1101 11:10:55.555650  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key.8a5bb5eb: {Name:mk6d3bd063c441aee9b2c9299f2a8eb783f60102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.555734  108661 certs.go:382] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt.8a5bb5eb -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt
	I1101 11:10:55.555816  108661 certs.go:386] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key.8a5bb5eb -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key
	I1101 11:10:55.555874  108661 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.key
	I1101 11:10:55.555890  108661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.crt with IP's: []
	I1101 11:10:55.822211  108661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.crt ...
	I1101 11:10:55.822242  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.crt: {Name:mke8d885b59ddfee589dfe7c2d3f001d6c2b17f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.822452  108661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.key ...
	I1101 11:10:55.822468  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.key: {Name:mk5922ac7e4c4a45c9e90672fdf263b964250c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
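
certs.go above generates the profile certs: a client cert, an apiserver cert with IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.236), and an aggregator proxy-client cert, all signed by the cached minikubeCA. A minimal standard-library sketch of minting a CA and a CA-signed serving cert with IP SANs; helper names, key sizes, and validity periods are illustrative, not minikube's crypto.go.

package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/pem"
    "fmt"
    "math/big"
    "net"
    "time"
)

// newCert signs a certificate for template with the given parent and key
// and returns it PEM-encoded. Error handling is trimmed for brevity.
func newCert(template, parent *x509.Certificate, pub interface{}, signer *rsa.PrivateKey) ([]byte, error) {
    der, err := x509.CreateCertificate(rand.Reader, template, parent, pub, signer)
    if err != nil {
        return nil, err
    }
    return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    ca := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
        IsCA:                  true,
        BasicConstraintsValid: true,
        KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    }
    caPEM, _ := newCert(ca, ca, &caKey.PublicKey, caKey) // self-signed CA

    srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    srv := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "minikube"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        // IP SANs matching the apiserver cert in the log above.
        IPAddresses: []net.IP{
            net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
            net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.236"),
        },
    }
    srvPEM, _ := newCert(srv, ca, &srvKey.PublicKey, caKey) // signed by the CA

    fmt.Printf("CA cert %d bytes, apiserver cert %d bytes\n", len(caPEM), len(srvPEM))
}
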
	I1101 11:10:55.822683  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem (1338 bytes)
	W1101 11:10:55.822721  108661 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998_empty.pem, impossibly tiny 0 bytes
	I1101 11:10:55.822731  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 11:10:55.822751  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:10:55.822774  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:10:55.822798  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 11:10:55.822844  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:10:55.823403  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:10:55.859931  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:10:55.900009  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:10:55.952497  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:10:55.988141  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1101 11:10:56.026774  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:10:56.062680  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:10:56.098038  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:10:56.130519  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /usr/share/ca-certificates/739982.pem (1708 bytes)
	I1101 11:10:56.168341  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:10:56.203660  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem --> /usr/share/ca-certificates/73998.pem (1338 bytes)
	I1101 11:10:56.236034  108661 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:10:56.262376  108661 ssh_runner.go:195] Run: openssl version
	I1101 11:10:56.272000  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739982.pem && ln -fs /usr/share/ca-certificates/739982.pem /etc/ssl/certs/739982.pem"
	I1101 11:10:56.290893  108661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739982.pem
	I1101 11:10:56.298078  108661 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:03 /usr/share/ca-certificates/739982.pem
	I1101 11:10:56.298154  108661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739982.pem
	I1101 11:10:56.308123  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/739982.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:10:56.324009  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:10:56.339960  108661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:56.348138  108661 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:56.348217  108661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:56.359381  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:10:56.374912  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73998.pem && ln -fs /usr/share/ca-certificates/73998.pem /etc/ssl/certs/73998.pem"
	I1101 11:10:56.395015  108661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73998.pem
	I1101 11:10:56.402005  108661 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:03 /usr/share/ca-certificates/73998.pem
	I1101 11:10:56.402078  108661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73998.pem
	I1101 11:10:56.410053  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73998.pem /etc/ssl/certs/51391683.0"
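
The three blocks above install each CA cert into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0) so OpenSSL-based tools can find them. A sketch of the same hash-and-symlink step, shelling out to openssl; paths and the helper name are illustrative, and the real commands run on the node via sudo.

package main

import (
    "fmt"
    "os"
    "os/exec"
    "strings"
)

// linkBySubjectHash computes the certificate's subject hash with openssl
// and points <certsDir>/<hash>.0 at it, mirroring the ln -fs above.
func linkBySubjectHash(certPath, certsDir string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    if err != nil {
        return err
    }
    hash := strings.TrimSpace(string(out))
    link := certsDir + "/" + hash + ".0"
    _ = os.Remove(link) // ln -fs semantics: replace any existing link
    return os.Symlink(certPath, link)
}

func main() {
    if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
        fmt.Println(err)
    }
}
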
	I1101 11:10:56.429933  108661 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:10:56.435591  108661 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:10:56.435669  108661 kubeadm.go:401] StartCluster: {Name:auto-216814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-216814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.236 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:10:56.435771  108661 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:10:56.435842  108661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:10:56.487982  108661 cri.go:89] found id: ""
	I1101 11:10:56.488068  108661 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:10:56.501860  108661 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:10:56.516234  108661 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:10:56.529441  108661 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:10:56.529467  108661 kubeadm.go:158] found existing configuration files:
	
	I1101 11:10:56.529545  108661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:10:56.543023  108661 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:10:56.543103  108661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:10:56.557630  108661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:10:56.571526  108661 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:10:56.571630  108661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:10:56.588744  108661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:10:56.606447  108661 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:10:56.606557  108661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:10:56.631463  108661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:10:56.654238  108661 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:10:56.654308  108661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:10:56.677205  108661 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
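
The Start line above is the full kubeadm init invocation: a pinned binaries directory prepended to PATH, the rendered config file, and a long --ignore-preflight-errors list. A trivial sketch of assembling such a command string; it only builds the string and is not how the bootstrapper code is actually organized.

package main

import (
    "fmt"
    "strings"
)

// kubeadmInitCmd assembles a kubeadm init command like the one in the log
// above; nothing is executed here.
func kubeadmInitCmd(version, config string, ignore []string) string {
    return fmt.Sprintf(
        `sudo /bin/bash -c "env PATH=/var/lib/minikube/binaries/%s:$PATH kubeadm init --config %s --ignore-preflight-errors=%s"`,
        version, config, strings.Join(ignore, ","),
    )
}

func main() {
    fmt.Println(kubeadmInitCmd("v1.34.1", "/var/tmp/minikube/kubeadm.yaml", []string{
        "DirAvailable--etc-kubernetes-manifests",
        "Port-10250", "Swap", "NumCPU", "Mem",
    }))
}
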
	I1101 11:10:56.742562  108661 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 11:10:56.742908  108661 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 11:10:56.856271  108661 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 11:10:56.856427  108661 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 11:10:56.856604  108661 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 11:10:56.867399  108661 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 11:10:56.923620  108661 out.go:252]   - Generating certificates and keys ...
	I1101 11:10:56.923728  108661 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 11:10:56.923835  108661 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 11:10:56.923961  108661 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 11:10:57.248352  108661 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 11:10:52.676230  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:52.677163  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:52.677182  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:52.677597  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:52.677639  108776 retry.go:31] will retry after 664.179046ms: waiting for domain to come up
	I1101 11:10:53.344317  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:53.345166  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:53.345191  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:53.345707  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:53.345758  108776 retry.go:31] will retry after 837.591891ms: waiting for domain to come up
	I1101 11:10:54.186815  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:54.187695  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:54.187771  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:54.188263  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:54.188354  108776 retry.go:31] will retry after 721.993568ms: waiting for domain to come up
	I1101 11:10:54.911886  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:54.912606  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:54.912627  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:54.913095  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:54.913141  108776 retry.go:31] will retry after 1.416266433s: waiting for domain to come up
	I1101 11:10:56.332062  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:56.333034  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:56.333062  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:56.333646  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:56.333697  108776 retry.go:31] will retry after 1.74901992s: waiting for domain to come up
	I1101 11:10:55.307707  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:10:55.307742  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:10:55.307766  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:55.354278  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:10:55.354311  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:10:55.469687  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:55.475166  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:10:55.475193  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:10:55.969938  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:55.978476  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:10:55.978505  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:10:56.469063  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:56.474121  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:10:56.474158  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:10:56.969865  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:56.975020  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:10:56.975046  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:10:57.469852  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:57.477783  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:10:57.477819  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:10:57.969327  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:57.976266  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 200:
	ok
	I1101 11:10:57.989248  108549 api_server.go:141] control plane version: v1.34.1
	I1101 11:10:57.989278  108549 api_server.go:131] duration metric: took 6.520442134s to wait for apiserver health ...
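
The block above is minikube polling the apiserver's /healthz endpoint roughly every 500ms, getting 500 while the rbac/bootstrap-roles post-start hook is still pending, and stopping once the endpoint answers 200. A minimal Go sketch of that polling loop, using only the URL and cadence visible in the log (this is an illustration, not minikube's own api_server.go):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // A freshly restarted apiserver serves a cert the host may not trust,
            // so this sketch skips verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "healthz returned 200: ok"
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.83.133:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
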
	I1101 11:10:57.989289  108549 cni.go:84] Creating CNI manager for ""
	I1101 11:10:57.989296  108549 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:10:57.990876  108549 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:10:57.992230  108549 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:10:58.006848  108549 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
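
The scp above installs the bridge CNI config at /etc/cni/net.d/1-k8s.conflist. The exact 496-byte file is not shown in the log; the sketch below writes an illustrative bridge+portmap conflist of the same general shape (the subnet and plugin options are assumptions, not the file minikube generates):

    package main

    import (
        "fmt"
        "os"
    )

    // Illustrative bridge CNI conflist; field values are assumptions.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        // Same destination path as the ssh_runner scp in the log above.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            fmt.Println("write conflist:", err)
        }
    }
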
	I1101 11:10:58.033910  108549 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:10:58.039306  108549 system_pods.go:59] 6 kube-system pods found
	I1101 11:10:58.039354  108549 system_pods.go:61] "coredns-66bc5c9577-crbpm" [f25a6b07-34dc-4d43-9b5a-59ca2a8be742] Running
	I1101 11:10:58.039370  108549 system_pods.go:61] "etcd-pause-112657" [ad182588-1ea3-41fd-88a3-7f0337e0f7bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:10:58.039382  108549 system_pods.go:61] "kube-apiserver-pause-112657" [af5176d8-4f34-48a6-9960-e7bc9a604816] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:10:58.039395  108549 system_pods.go:61] "kube-controller-manager-pause-112657" [bb6d726e-7590-4f1f-b719-3c995d2f115e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:10:58.039403  108549 system_pods.go:61] "kube-proxy-pmht9" [93cedff1-d264-4c71-af06-95e4b53e637e] Running
	I1101 11:10:58.039413  108549 system_pods.go:61] "kube-scheduler-pause-112657" [2e9914ec-859d-4893-b671-18ce0be5fe70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:10:58.039421  108549 system_pods.go:74] duration metric: took 5.478073ms to wait for pod list to return data ...
	I1101 11:10:58.039432  108549 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:10:58.047406  108549 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:10:58.047447  108549 node_conditions.go:123] node cpu capacity is 2
	I1101 11:10:58.047462  108549 node_conditions.go:105] duration metric: took 8.02459ms to run NodePressure ...
	I1101 11:10:58.047523  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:58.489292  108549 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 11:10:58.493678  108549 kubeadm.go:744] kubelet initialised
	I1101 11:10:58.493710  108549 kubeadm.go:745] duration metric: took 4.388706ms waiting for restarted kubelet to initialise ...
	I1101 11:10:58.493734  108549 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:10:58.520476  108549 ops.go:34] apiserver oom_adj: -16
	I1101 11:10:58.520502  108549 kubeadm.go:602] duration metric: took 20.568272955s to restartPrimaryControlPlane
	I1101 11:10:58.520514  108549 kubeadm.go:403] duration metric: took 20.898808507s to StartCluster
	I1101 11:10:58.520553  108549 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:58.520662  108549 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:10:58.521656  108549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:58.521965  108549 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.133 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:10:58.522104  108549 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:10:58.522269  108549 config.go:182] Loaded profile config "pause-112657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:58.524034  108549 out.go:179] * Verifying Kubernetes components...
	I1101 11:10:58.524041  108549 out.go:179] * Enabled addons: 
	I1101 11:10:57.539784  108661 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 11:10:57.617128  108661 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 11:10:57.687158  108661 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 11:10:57.687319  108661 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-216814 localhost] and IPs [192.168.39.236 127.0.0.1 ::1]
	I1101 11:10:57.863702  108661 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 11:10:57.863986  108661 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-216814 localhost] and IPs [192.168.39.236 127.0.0.1 ::1]
	I1101 11:10:57.981186  108661 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 11:10:58.094310  108661 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 11:10:58.365497  108661 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 11:10:58.365610  108661 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 11:10:58.435990  108661 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 11:10:58.825227  108661 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 11:10:59.182103  108661 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 11:10:59.640473  108661 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 11:11:00.222034  108661 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 11:11:00.222150  108661 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 11:11:00.225434  108661 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 11:11:00.227687  108661 out.go:252]   - Booting up control plane ...
	I1101 11:11:00.227826  108661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 11:11:00.228749  108661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 11:11:00.229680  108661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 11:11:00.254086  108661 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 11:11:00.254243  108661 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 11:11:00.264584  108661 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 11:11:00.264755  108661 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 11:11:00.264837  108661 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 11:11:00.442584  108661 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 11:11:00.442794  108661 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 11:11:01.443590  108661 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001971002s
	I1101 11:11:01.448676  108661 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 11:11:01.448792  108661 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.236:8443/livez
	I1101 11:11:01.448917  108661 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 11:11:01.449068  108661 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 11:10:58.084980  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:58.085816  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:58.085834  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:58.086405  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:58.086448  108776 retry.go:31] will retry after 1.925879476s: waiting for domain to come up
	I1101 11:11:00.013986  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:00.014853  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:11:00.014876  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:11:00.015349  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:11:00.015394  108776 retry.go:31] will retry after 2.062807968s: waiting for domain to come up
	I1101 11:11:02.080195  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:02.081068  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:11:02.081097  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:11:02.081667  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:11:02.081717  108776 retry.go:31] will retry after 3.437048574s: waiting for domain to come up
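
The libmachine lines above poll the kindnet-216814 domain for a DHCP lease (falling back to ARP) and retry with growing delays until an IP shows up. A generic Go sketch of that retry pattern; the lookupIP probe below is a stand-in, not libmachine's real call:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("no IP address yet")

    // lookupIP stands in for the lease/ARP lookup shown in the log.
    func lookupIP(domain string) (string, error) {
        return "", errNoIP // pretend the lease table is still empty
    }

    // waitForIP retries with a growing, jittered delay, mirroring the
    // "will retry after ..." lines emitted by retry.go.
    func waitForIP(domain string, attempts int) (string, error) {
        delay := time.Second
        for i := 0; i < attempts; i++ {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %s: waiting for domain to come up\n", wait)
            time.Sleep(wait)
            delay *= 2
        }
        return "", fmt.Errorf("domain %s did not get an IP after %d attempts", domain, attempts)
    }

    func main() {
        if _, err := waitForIP("kindnet-216814", 3); err != nil {
            fmt.Println(err)
        }
    }
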
	I1101 11:10:58.525559  108549 addons.go:515] duration metric: took 3.483758ms for enable addons: enabled=[]
	I1101 11:10:58.525592  108549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:58.807639  108549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:10:58.833479  108549 node_ready.go:35] waiting up to 6m0s for node "pause-112657" to be "Ready" ...
	I1101 11:10:58.836956  108549 node_ready.go:49] node "pause-112657" is "Ready"
	I1101 11:10:58.836994  108549 node_ready.go:38] duration metric: took 3.466341ms for node "pause-112657" to be "Ready" ...
	I1101 11:10:58.837009  108549 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:10:58.837086  108549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:10:58.857370  108549 api_server.go:72] duration metric: took 335.358769ms to wait for apiserver process to appear ...
	I1101 11:10:58.857406  108549 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:10:58.857430  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:58.864031  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 200:
	ok
	I1101 11:10:58.865059  108549 api_server.go:141] control plane version: v1.34.1
	I1101 11:10:58.865092  108549 api_server.go:131] duration metric: took 7.670212ms to wait for apiserver health ...
	I1101 11:10:58.865103  108549 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:10:58.869207  108549 system_pods.go:59] 6 kube-system pods found
	I1101 11:10:58.869235  108549 system_pods.go:61] "coredns-66bc5c9577-crbpm" [f25a6b07-34dc-4d43-9b5a-59ca2a8be742] Running
	I1101 11:10:58.869247  108549 system_pods.go:61] "etcd-pause-112657" [ad182588-1ea3-41fd-88a3-7f0337e0f7bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:10:58.869256  108549 system_pods.go:61] "kube-apiserver-pause-112657" [af5176d8-4f34-48a6-9960-e7bc9a604816] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:10:58.869269  108549 system_pods.go:61] "kube-controller-manager-pause-112657" [bb6d726e-7590-4f1f-b719-3c995d2f115e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:10:58.869297  108549 system_pods.go:61] "kube-proxy-pmht9" [93cedff1-d264-4c71-af06-95e4b53e637e] Running
	I1101 11:10:58.869309  108549 system_pods.go:61] "kube-scheduler-pause-112657" [2e9914ec-859d-4893-b671-18ce0be5fe70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:10:58.869318  108549 system_pods.go:74] duration metric: took 4.207545ms to wait for pod list to return data ...
	I1101 11:10:58.869329  108549 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:10:58.871459  108549 default_sa.go:45] found service account: "default"
	I1101 11:10:58.871480  108549 default_sa.go:55] duration metric: took 2.143644ms for default service account to be created ...
	I1101 11:10:58.871489  108549 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:10:58.875937  108549 system_pods.go:86] 6 kube-system pods found
	I1101 11:10:58.875966  108549 system_pods.go:89] "coredns-66bc5c9577-crbpm" [f25a6b07-34dc-4d43-9b5a-59ca2a8be742] Running
	I1101 11:10:58.875979  108549 system_pods.go:89] "etcd-pause-112657" [ad182588-1ea3-41fd-88a3-7f0337e0f7bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:10:58.875990  108549 system_pods.go:89] "kube-apiserver-pause-112657" [af5176d8-4f34-48a6-9960-e7bc9a604816] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:10:58.876000  108549 system_pods.go:89] "kube-controller-manager-pause-112657" [bb6d726e-7590-4f1f-b719-3c995d2f115e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:10:58.876006  108549 system_pods.go:89] "kube-proxy-pmht9" [93cedff1-d264-4c71-af06-95e4b53e637e] Running
	I1101 11:10:58.876017  108549 system_pods.go:89] "kube-scheduler-pause-112657" [2e9914ec-859d-4893-b671-18ce0be5fe70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:10:58.876027  108549 system_pods.go:126] duration metric: took 4.530899ms to wait for k8s-apps to be running ...
	I1101 11:10:58.876039  108549 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:10:58.876100  108549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:10:58.894381  108549 system_svc.go:56] duration metric: took 18.330906ms WaitForService to wait for kubelet
	I1101 11:10:58.894414  108549 kubeadm.go:587] duration metric: took 372.409575ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:10:58.894436  108549 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:10:58.897256  108549 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:10:58.897278  108549 node_conditions.go:123] node cpu capacity is 2
	I1101 11:10:58.897291  108549 node_conditions.go:105] duration metric: took 2.848766ms to run NodePressure ...
	I1101 11:10:58.897306  108549 start.go:242] waiting for startup goroutines ...
	I1101 11:10:58.897317  108549 start.go:247] waiting for cluster config update ...
	I1101 11:10:58.897331  108549 start.go:256] writing updated cluster config ...
	I1101 11:10:58.897737  108549 ssh_runner.go:195] Run: rm -f paused
	I1101 11:10:58.903892  108549 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:10:58.904513  108549 kapi.go:59] client config for pause-112657: &rest.Config{Host:"https://192.168.83.133:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/client.key", CAFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]st
ring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
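
The rest.Config dump above is the client minikube builds from the profile's client certificate, key, and CA. A minimal client-go sketch that assembles an equivalent config and lists kube-system pods; host and certificate paths are copied from the log, but the code itself is illustrative rather than minikube's kapi.go:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // Host and certificate paths taken from the rest.Config logged above.
        cfg := &rest.Config{
            Host: "https://192.168.83.133:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/client.key",
                CAFile:   "/home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase)
        }
    }
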
	I1101 11:10:58.909303  108549 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-crbpm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:10:58.915943  108549 pod_ready.go:94] pod "coredns-66bc5c9577-crbpm" is "Ready"
	I1101 11:10:58.915969  108549 pod_ready.go:86] duration metric: took 6.642416ms for pod "coredns-66bc5c9577-crbpm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:10:58.918777  108549 pod_ready.go:83] waiting for pod "etcd-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:10:59.926689  108549 pod_ready.go:94] pod "etcd-pause-112657" is "Ready"
	I1101 11:10:59.926722  108549 pod_ready.go:86] duration metric: took 1.007920051s for pod "etcd-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:10:59.931641  108549 pod_ready.go:83] waiting for pod "kube-apiserver-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:01.940052  108549 pod_ready.go:94] pod "kube-apiserver-pause-112657" is "Ready"
	I1101 11:11:01.940085  108549 pod_ready.go:86] duration metric: took 2.008414514s for pod "kube-apiserver-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:01.943342  108549 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:04.426967  108661 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.980516427s
	I1101 11:11:05.870721  108661 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.425775837s
	I1101 11:11:05.522265  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:05.522931  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:11:05.522948  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:11:05.523429  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:11:05.523465  108776 retry.go:31] will retry after 4.363124933s: waiting for domain to come up
	I1101 11:11:02.951764  108549 pod_ready.go:94] pod "kube-controller-manager-pause-112657" is "Ready"
	I1101 11:11:02.951799  108549 pod_ready.go:86] duration metric: took 1.008372766s for pod "kube-controller-manager-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:02.954380  108549 pod_ready.go:83] waiting for pod "kube-proxy-pmht9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:03.109880  108549 pod_ready.go:94] pod "kube-proxy-pmht9" is "Ready"
	I1101 11:11:03.109917  108549 pod_ready.go:86] duration metric: took 155.510185ms for pod "kube-proxy-pmht9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:03.309471  108549 pod_ready.go:83] waiting for pod "kube-scheduler-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 11:11:05.317170  108549 pod_ready.go:104] pod "kube-scheduler-pause-112657" is not "Ready", error: <nil>
	W1101 11:11:07.816738  108549 pod_ready.go:104] pod "kube-scheduler-pause-112657" is not "Ready", error: <nil>
	I1101 11:11:07.947550  108661 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503701615s
	I1101 11:11:07.968139  108661 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 11:11:07.982853  108661 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 11:11:08.005305  108661 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 11:11:08.005586  108661 kubeadm.go:319] [mark-control-plane] Marking the node auto-216814 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 11:11:08.018387  108661 kubeadm.go:319] [bootstrap-token] Using token: d6s76y.hcvkg7oo9lwcty05
	I1101 11:11:08.019677  108661 out.go:252]   - Configuring RBAC rules ...
	I1101 11:11:08.019830  108661 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 11:11:08.028340  108661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 11:11:08.037193  108661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 11:11:08.043840  108661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 11:11:08.047602  108661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 11:11:08.051345  108661 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 11:11:08.355915  108661 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 11:11:08.825271  108661 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 11:11:09.358877  108661 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 11:11:09.360166  108661 kubeadm.go:319] 
	I1101 11:11:09.360279  108661 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 11:11:09.360298  108661 kubeadm.go:319] 
	I1101 11:11:09.360400  108661 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 11:11:09.360410  108661 kubeadm.go:319] 
	I1101 11:11:09.360448  108661 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 11:11:09.360558  108661 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 11:11:09.360743  108661 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 11:11:09.360762  108661 kubeadm.go:319] 
	I1101 11:11:09.360839  108661 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 11:11:09.360849  108661 kubeadm.go:319] 
	I1101 11:11:09.360889  108661 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 11:11:09.360895  108661 kubeadm.go:319] 
	I1101 11:11:09.360978  108661 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 11:11:09.361113  108661 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 11:11:09.361212  108661 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 11:11:09.361222  108661 kubeadm.go:319] 
	I1101 11:11:09.361356  108661 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 11:11:09.361433  108661 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 11:11:09.361446  108661 kubeadm.go:319] 
	I1101 11:11:09.361585  108661 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token d6s76y.hcvkg7oo9lwcty05 \
	I1101 11:11:09.361746  108661 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a \
	I1101 11:11:09.361787  108661 kubeadm.go:319] 	--control-plane 
	I1101 11:11:09.361796  108661 kubeadm.go:319] 
	I1101 11:11:09.361921  108661 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 11:11:09.361930  108661 kubeadm.go:319] 
	I1101 11:11:09.362010  108661 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token d6s76y.hcvkg7oo9lwcty05 \
	I1101 11:11:09.362131  108661 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a 
	I1101 11:11:09.363288  108661 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 11:11:09.363315  108661 cni.go:84] Creating CNI manager for ""
	I1101 11:11:09.363323  108661 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:11:09.365030  108661 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:11:10.317879  108549 pod_ready.go:94] pod "kube-scheduler-pause-112657" is "Ready"
	I1101 11:11:10.317912  108549 pod_ready.go:86] duration metric: took 7.008410496s for pod "kube-scheduler-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:10.317924  108549 pod_ready.go:40] duration metric: took 11.414002138s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:11:10.372210  108549 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 11:11:10.373933  108549 out.go:179] * Done! kubectl is now configured to use "pause-112657" cluster and "default" namespace by default
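
The pod_ready.go waits above amount to polling each control-plane pod until its PodReady condition turns True (kube-scheduler-pause-112657 took about 7s here). A hedged client-go sketch of that check; the kubeconfig path comes from the "Updating kubeconfig" line earlier in the log, and the helper names are mine, not minikube's:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's PodReady condition is True.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitPodReady polls a kube-system pod until it is Ready or the timeout expires.
    func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), name, metav1.GetOptions{})
            if err == nil && isReady(pod) {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s not Ready within %s", name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21830-70113/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(cs, "kube-scheduler-pause-112657", 4*time.Minute); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("kube-scheduler-pause-112657 is Ready")
        }
    }
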
	
	
	==> CRI-O <==
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.075771300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761995471075698078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f2406f7-fc63-4f8b-8ce3-74c46322acd0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.076461439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9af1d8f0-3825-47d7-b6ab-9df76648b5f7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.076655137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9af1d8f0-3825-47d7-b6ab-9df76648b5f7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.076953427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4cd50e19c80294183a84f1d6b4ba1319f1f8e73be2fdbbb0ea63eb0cfa3d1e,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995451043743074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aa50e95ae4ca4ff8424b31d4eed01ebd730345a23823c47c6c7c5d5f53b248,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995451068389604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annot
ations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ea1384edf60f795b201ee53bb5bb7090a53d71fa1667e6af09c1fdcfbe0740,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe9185934279757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995451047336429,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc15987bc0dc62809b59d9436ba53710885fb23be0858d321037f784b4985be,PodSandboxId:2f66f988ef08fa7c0104edd97f6f605f5f588f33e0d4b28bfca9f4064122eedd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995438774878738,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb1f5d3d88e5858fba14591c807783a4a562eb1c18635ab0a6d79b3cfaf2963,PodSandboxId:a90f13f4843643d5808f97b5aa874429995acd58fc58f7dd7554b9cccf033519,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761995438002251701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cac02c746090b038b03b7e3666ef90276d94f274eb8926c189d832b02e7d27b,PodSandboxId:ad282dbcda533
473f977b16f887db85cb11621f5777cf7ceac5e424f29fc7daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995437398010668,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe918593427
9757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761995437537846506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f557d4e8f14008c0f3af6
10b5e7d21f6bc34a9ef9b305c98652539ec8b3a059,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761995437321516572,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761995437231711940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b,PodSandboxId:a4ef5d2bb179947a42573697c25401c0212c94b3b46dd36c8c4d666705dcaed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761995403066370455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79,PodSandboxId:67b8fa24902cc65de9c1fb88a1b0a1e960ae441974a0c6b3442907e5b9c845e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17619
95401486043006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950,PodSandboxId:e78b0a7a8f0bae29bf39ab429ab25b51b7eda6f250360739d999c601048ccbc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761995388421602581,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9af1d8f0-3825-47d7-b6ab-9df76648b5f7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.138918629Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d9c06626-a59d-47d1-88d7-c6238d865cc4 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.139209708Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d9c06626-a59d-47d1-88d7-c6238d865cc4 name=/runtime.v1.RuntimeService/Version
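
The two CRI-O debug lines above are one gRPC Version round trip on the CRI RuntimeService (cri-o 1.29.1, CRI API v1). A sketch of issuing the same call with the published cri-api client; the socket path is CRI-O's conventional default and an assumption here:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default socket; adjust if the runtime is configured differently.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            panic(err)
        }
        // Mirrors the fields in the VersionResponse logged above.
        fmt.Println(resp.RuntimeName, resp.RuntimeVersion, resp.RuntimeApiVersion)
    }
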
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.141393044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2180504-ccd6-44e6-bf8c-1df29e0a5693 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.141813185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761995471141788631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2180504-ccd6-44e6-bf8c-1df29e0a5693 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.142847080Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b05b1163-e1bc-4d5d-94e8-ac766b87db49 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.143194094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b05b1163-e1bc-4d5d-94e8-ac766b87db49 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.144230681Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4cd50e19c80294183a84f1d6b4ba1319f1f8e73be2fdbbb0ea63eb0cfa3d1e,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995451043743074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aa50e95ae4ca4ff8424b31d4eed01ebd730345a23823c47c6c7c5d5f53b248,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995451068389604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annot
ations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ea1384edf60f795b201ee53bb5bb7090a53d71fa1667e6af09c1fdcfbe0740,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe9185934279757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995451047336429,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc15987bc0dc62809b59d9436ba53710885fb23be0858d321037f784b4985be,PodSandboxId:2f66f988ef08fa7c0104edd97f6f605f5f588f33e0d4b28bfca9f4064122eedd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995438774878738,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb1f5d3d88e5858fba14591c807783a4a562eb1c18635ab0a6d79b3cfaf2963,PodSandboxId:a90f13f4843643d5808f97b5aa874429995acd58fc58f7dd7554b9cccf033519,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761995438002251701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cac02c746090b038b03b7e3666ef90276d94f274eb8926c189d832b02e7d27b,PodSandboxId:ad282dbcda533
473f977b16f887db85cb11621f5777cf7ceac5e424f29fc7daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995437398010668,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe918593427
9757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761995437537846506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f557d4e8f14008c0f3af6
10b5e7d21f6bc34a9ef9b305c98652539ec8b3a059,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761995437321516572,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761995437231711940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b,PodSandboxId:a4ef5d2bb179947a42573697c25401c0212c94b3b46dd36c8c4d666705dcaed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761995403066370455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79,PodSandboxId:67b8fa24902cc65de9c1fb88a1b0a1e960ae441974a0c6b3442907e5b9c845e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17619
95401486043006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950,PodSandboxId:e78b0a7a8f0bae29bf39ab429ab25b51b7eda6f250360739d999c601048ccbc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761995388421602581,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b05b1163-e1bc-4d5d-94e8-ac766b87db49 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.201161887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=22e9235b-6d3c-49c7-9bb5-96c8769b9eac name=/runtime.v1.RuntimeService/Version
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.201495305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=22e9235b-6d3c-49c7-9bb5-96c8769b9eac name=/runtime.v1.RuntimeService/Version
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.203122902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8bd64c60-1f08-4ee6-9d0b-aca3661607e6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.204377119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761995471204269723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8bd64c60-1f08-4ee6-9d0b-aca3661607e6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.205266277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b541500-9924-4233-bc77-f51b1e27e65a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.205367625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b541500-9924-4233-bc77-f51b1e27e65a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.205640134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4cd50e19c80294183a84f1d6b4ba1319f1f8e73be2fdbbb0ea63eb0cfa3d1e,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995451043743074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aa50e95ae4ca4ff8424b31d4eed01ebd730345a23823c47c6c7c5d5f53b248,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995451068389604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annot
ations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ea1384edf60f795b201ee53bb5bb7090a53d71fa1667e6af09c1fdcfbe0740,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe9185934279757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995451047336429,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc15987bc0dc62809b59d9436ba53710885fb23be0858d321037f784b4985be,PodSandboxId:2f66f988ef08fa7c0104edd97f6f605f5f588f33e0d4b28bfca9f4064122eedd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995438774878738,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb1f5d3d88e5858fba14591c807783a4a562eb1c18635ab0a6d79b3cfaf2963,PodSandboxId:a90f13f4843643d5808f97b5aa874429995acd58fc58f7dd7554b9cccf033519,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761995438002251701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cac02c746090b038b03b7e3666ef90276d94f274eb8926c189d832b02e7d27b,PodSandboxId:ad282dbcda533
473f977b16f887db85cb11621f5777cf7ceac5e424f29fc7daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995437398010668,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe918593427
9757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761995437537846506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f557d4e8f14008c0f3af6
10b5e7d21f6bc34a9ef9b305c98652539ec8b3a059,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761995437321516572,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761995437231711940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b,PodSandboxId:a4ef5d2bb179947a42573697c25401c0212c94b3b46dd36c8c4d666705dcaed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761995403066370455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79,PodSandboxId:67b8fa24902cc65de9c1fb88a1b0a1e960ae441974a0c6b3442907e5b9c845e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17619
95401486043006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950,PodSandboxId:e78b0a7a8f0bae29bf39ab429ab25b51b7eda6f250360739d999c601048ccbc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761995388421602581,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b541500-9924-4233-bc77-f51b1e27e65a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.259274771Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e2d140d-c0f5-4a03-aa94-0c8c827ee3bb name=/runtime.v1.RuntimeService/Version
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.259414402Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e2d140d-c0f5-4a03-aa94-0c8c827ee3bb name=/runtime.v1.RuntimeService/Version
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.260759964Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3cba7069-04d4-4c03-826d-e5dc04e23452 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.261159574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761995471261131072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3cba7069-04d4-4c03-826d-e5dc04e23452 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.261978813Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f52f9e2-3ceb-478b-9681-cf403a31491f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.262087510Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f52f9e2-3ceb-478b-9681-cf403a31491f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:11 pause-112657 crio[2544]: time="2025-11-01 11:11:11.262563021Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4cd50e19c80294183a84f1d6b4ba1319f1f8e73be2fdbbb0ea63eb0cfa3d1e,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995451043743074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aa50e95ae4ca4ff8424b31d4eed01ebd730345a23823c47c6c7c5d5f53b248,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995451068389604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annot
ations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ea1384edf60f795b201ee53bb5bb7090a53d71fa1667e6af09c1fdcfbe0740,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe9185934279757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995451047336429,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc15987bc0dc62809b59d9436ba53710885fb23be0858d321037f784b4985be,PodSandboxId:2f66f988ef08fa7c0104edd97f6f605f5f588f33e0d4b28bfca9f4064122eedd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995438774878738,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb1f5d3d88e5858fba14591c807783a4a562eb1c18635ab0a6d79b3cfaf2963,PodSandboxId:a90f13f4843643d5808f97b5aa874429995acd58fc58f7dd7554b9cccf033519,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761995438002251701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cac02c746090b038b03b7e3666ef90276d94f274eb8926c189d832b02e7d27b,PodSandboxId:ad282dbcda533
473f977b16f887db85cb11621f5777cf7ceac5e424f29fc7daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995437398010668,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe918593427
9757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761995437537846506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f557d4e8f14008c0f3af6
10b5e7d21f6bc34a9ef9b305c98652539ec8b3a059,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761995437321516572,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761995437231711940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b,PodSandboxId:a4ef5d2bb179947a42573697c25401c0212c94b3b46dd36c8c4d666705dcaed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761995403066370455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79,PodSandboxId:67b8fa24902cc65de9c1fb88a1b0a1e960ae441974a0c6b3442907e5b9c845e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17619
95401486043006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950,PodSandboxId:e78b0a7a8f0bae29bf39ab429ab25b51b7eda6f250360739d999c601048ccbc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761995388421602581,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f52f9e2-3ceb-478b-9681-cf403a31491f name=/runtime.v1.RuntimeService/ListContainers
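	Note: the Request/Response pairs above are ordinary CRI gRPC calls (Version, ImageFsInfo, ListContainers) that kubelet and the test harness issue against CRI-O roughly once a second. The following is a minimal Go sketch of the same three calls, assuming CRI-O's socket is at the minikube default /var/run/crio/crio.sock, root access to that socket, and the k8s.io/cri-api v1 and google.golang.org/grpc modules; it only illustrates the API being logged and is not part of the test.

// cri_list.go - sketch of the CRI calls seen in the crio debug log above.
// Assumptions: CRI-O socket at /var/run/crio/crio.sock, run as root.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O unix socket (grpc.Dial works the same on older grpc-go).
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// /runtime.v1.RuntimeService/Version
	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ver.RuntimeName, ver.RuntimeVersion) // e.g. "cri-o 1.29.1"

	// /runtime.v1.ImageService/ImageFsInfo
	fs, err := img.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range fs.ImageFilesystems {
		fmt.Println(u.FsId.Mountpoint, u.UsedBytes.Value)
	}

	// /runtime.v1.RuntimeService/ListContainers with an empty filter, which is
	// what makes crio log "No filters were applied, returning full container list".
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range list.Containers {
		fmt.Println(c.Metadata.Name, c.State, c.Metadata.Attempt)
	}
}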
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a4aa50e95ae4c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   20 seconds ago       Running             kube-scheduler            2                   96707122e744b       kube-scheduler-pause-112657
	f9ea1384edf60       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   20 seconds ago       Running             kube-apiserver            2                   6fe98b0d10f58       kube-apiserver-pause-112657
	cd4cd50e19c80       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   20 seconds ago       Running             kube-controller-manager   2                   bfdaa75efcfd4       kube-controller-manager-pause-112657
	2bc15987bc0dc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   32 seconds ago       Running             coredns                   1                   2f66f988ef08f       coredns-66bc5c9577-crbpm
	fdb1f5d3d88e5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   33 seconds ago       Running             etcd                      1                   a90f13f484364       etcd-pause-112657
	f8e195bbfd8af       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   33 seconds ago       Exited              kube-apiserver            1                   6fe98b0d10f58       kube-apiserver-pause-112657
	8cac02c746090       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   33 seconds ago       Running             kube-proxy                1                   ad282dbcda533       kube-proxy-pmht9
	4f557d4e8f140       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago       Exited              kube-scheduler            1                   96707122e744b       kube-scheduler-pause-112657
	a5f4bd825d401       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago       Exited              kube-controller-manager   1                   bfdaa75efcfd4       kube-controller-manager-pause-112657
	362c37d6f3cbe       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   a4ef5d2bb1799       coredns-66bc5c9577-crbpm
	b140a0c1d767d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   67b8fa24902cc       kube-proxy-pmht9
	17af9600e453e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   e78b0a7a8f0ba       etcd-pause-112657
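	Note: the ages in the CREATED column are derived from the CreatedAt fields of the ListContainers responses above, which are unix timestamps in nanoseconds. A small sketch of that conversion, using only values taken from the log (the running kube-scheduler container and the ~11:11:11 UTC snapshot time):

// Sketch: CreatedAt (unix nanoseconds) -> relative age shown in the table.
package main

import (
	"fmt"
	"time"
)

func main() {
	// CreatedAt of the running kube-scheduler container from the crio log above.
	created := time.Unix(0, 1761995451068389604) // 2025-11-01 11:10:51 UTC
	// The log snapshot itself was taken around 11:11:11 UTC.
	snapshot := time.Date(2025, 11, 1, 11, 11, 11, 0, time.UTC)
	fmt.Println(snapshot.Sub(created).Round(time.Second)) // ~20s, i.e. "20 seconds ago"
}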
	
	
	==> coredns [2bc15987bc0dc62809b59d9436ba53710885fb23be0858d321037f784b4985be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59844 - 12657 "HINFO IN 7001407449660026208.7607439692613121171. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.077808099s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
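	Note: the repeated `plugin/ready: Still waiting on: "kubernetes"` lines mean the coredns ready plugin keeps /ready unhealthy until the kubernetes plugin has synced with the API server; this is also why the readiness-probe (8181) and liveness-probe (8080) container ports appear in the crio annotations above. A hedged sketch of probing both endpoints from inside the pod network follows; the pod IP is a hypothetical placeholder, not taken from this log.

// Sketch: probe coredns health (:8080/health) and readiness (:8181/ready).
// Assumption: run from inside the pod network (e.g. via kubectl exec/debug)
// with the real coredns pod IP substituted for podIP.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	podIP := "10.244.0.x" // hypothetical placeholder
	client := &http.Client{Timeout: 2 * time.Second}
	for _, url := range []string{
		"http://" + podIP + ":8080/health", // health plugin (liveness)
		"http://" + podIP + ":8181/ready",  // ready plugin (readiness)
	} {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println(url, "error:", err)
			continue
		}
		fmt.Println(url, resp.Status) // non-200 on /ready while still waiting on "kubernetes"
		resp.Body.Close()
	}
}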
	
	
	==> coredns [362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54699 - 5061 "HINFO IN 8553294983351850771.5981119637402597002. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091053634s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-112657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-112657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=pause-112657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_09_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:09:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-112657
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:11:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:10:55 +0000   Sat, 01 Nov 2025 11:09:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:10:55 +0000   Sat, 01 Nov 2025 11:09:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:10:55 +0000   Sat, 01 Nov 2025 11:09:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:10:55 +0000   Sat, 01 Nov 2025 11:09:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.133
	  Hostname:    pause-112657
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec533cff008b47edbf935af9f3a03b16
	  System UUID:                ec533cff-008b-47ed-bf93-5af9f3a03b16
	  Boot ID:                    96b85bcf-a6ae-472a-891b-66a32f625306
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-crbpm                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     71s
	  kube-system                 etcd-pause-112657                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         76s
	  kube-system                 kube-apiserver-pause-112657             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-pause-112657    200m (10%)    0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-pmht9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-scheduler-pause-112657             100m (5%)     0 (0%)      0 (0%)           0 (0%)         76s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 69s                kube-proxy       
	  Normal  Starting                 14s                kube-proxy       
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)  kubelet          Node pause-112657 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)  kubelet          Node pause-112657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)  kubelet          Node pause-112657 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 76s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  76s                kubelet          Node pause-112657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet          Node pause-112657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet          Node pause-112657 status is now: NodeHasSufficientPID
	  Normal  NodeReady                75s                kubelet          Node pause-112657 status is now: NodeReady
	  Normal  RegisteredNode           72s                node-controller  Node pause-112657 event: Registered Node pause-112657 in Controller
	  Normal  Starting                 21s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)  kubelet          Node pause-112657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)  kubelet          Node pause-112657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)  kubelet          Node pause-112657 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                node-controller  Node pause-112657 event: Registered Node pause-112657 in Controller
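	Note: the percentages in the Allocated resources table follow directly from the Allocatable values above: the six pods request 100m+100m+250m+200m+0+100m = 750m CPU out of 2 cores (37%) and 170Mi of memory out of 3035912Ki (about 5%). A short sketch of the same arithmetic with k8s.io/apimachinery resource quantities, purely as an illustration and assuming that module is available:

// Sketch: reproduce the "Allocated resources" percentages from the node's
// Allocatable values and the pod requests listed above.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	allocatableCPU := resource.MustParse("2")         // cpu: 2
	allocatableMem := resource.MustParse("3035912Ki") // memory: 3035912Ki

	requestedCPU := resource.MustParse("750m")  // sum of the pod CPU requests
	requestedMem := resource.MustParse("170Mi") // coredns memory request

	// Integer division truncates, matching the rounded figures in the table.
	cpuPct := 100 * requestedCPU.MilliValue() / allocatableCPU.MilliValue()
	memPct := 100 * requestedMem.Value() / allocatableMem.Value()
	fmt.Printf("cpu %s (%d%%), memory %s (%d%%)\n",
		requestedCPU.String(), cpuPct, requestedMem.String(), memPct)
	// Prints: cpu 750m (37%), memory 170Mi (5%)
}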
	
	
	==> dmesg <==
	[Nov 1 11:09] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001328] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007052] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.192326] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.116118] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.119933] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.098113] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.153447] kauditd_printk_skb: 171 callbacks suppressed
	[Nov 1 11:10] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.162289] kauditd_printk_skb: 189 callbacks suppressed
	[  +7.275848] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.130047] kauditd_printk_skb: 253 callbacks suppressed
	[  +7.363826] kauditd_printk_skb: 63 callbacks suppressed
	[Nov 1 11:11] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950] <==
	{"level":"warn","ts":"2025-11-01T11:09:58.540163Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.338753ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T11:09:58.540254Z","caller":"traceutil/trace.go:172","msg":"trace[1889406150] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:314; }","duration":"147.444199ms","start":"2025-11-01T11:09:58.392800Z","end":"2025-11-01T11:09:58.540244Z","steps":["trace[1889406150] 'agreement among raft nodes before linearized reading'  (duration: 147.239131ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:09:58.539449Z","caller":"traceutil/trace.go:172","msg":"trace[1696657071] linearizableReadLoop","detail":"{readStateIndex:321; appliedIndex:321; }","duration":"146.625061ms","start":"2025-11-01T11:09:58.392803Z","end":"2025-11-01T11:09:58.539428Z","steps":["trace[1696657071] 'read index received'  (duration: 146.568319ms)","trace[1696657071] 'applied index is now lower than readState.Index'  (duration: 55.438µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:09:58.555752Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.316573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-11-01T11:09:58.555810Z","caller":"traceutil/trace.go:172","msg":"trace[1471778077] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:314; }","duration":"110.386016ms","start":"2025-11-01T11:09:58.445410Z","end":"2025-11-01T11:09:58.555796Z","steps":["trace[1471778077] 'agreement among raft nodes before linearized reading'  (duration: 100.218868ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:10:05.869973Z","caller":"traceutil/trace.go:172","msg":"trace[1617331651] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"236.68234ms","start":"2025-11-01T11:10:05.633272Z","end":"2025-11-01T11:10:05.869954Z","steps":["trace[1617331651] 'process raft request'  (duration: 160.95613ms)","trace[1617331651] 'compare'  (duration: 75.472834ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T11:10:06.047802Z","caller":"traceutil/trace.go:172","msg":"trace[226132889] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"127.853495ms","start":"2025-11-01T11:10:05.919900Z","end":"2025-11-01T11:10:06.047753Z","steps":["trace[226132889] 'process raft request'  (duration: 127.209002ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:10:20.879013Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T11:10:20.879212Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-112657","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.133:2380"],"advertise-client-urls":["https://192.168.83.133:2379"]}
	{"level":"error","ts":"2025-11-01T11:10:20.888066Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T11:10:20.965891Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T11:10:20.965974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:10:20.965993Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"574f8e030020bbf0","current-leader-member-id":"574f8e030020bbf0"}
	{"level":"info","ts":"2025-11-01T11:10:20.966087Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T11:10:20.966096Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T11:10:20.966349Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.133:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T11:10:20.966458Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.133:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T11:10:20.966487Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.133:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T11:10:20.966649Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T11:10:20.966759Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T11:10:20.966779Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:10:20.969274Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.133:2380"}
	{"level":"error","ts":"2025-11-01T11:10:20.969621Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.133:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:10:20.969664Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.133:2380"}
	{"level":"info","ts":"2025-11-01T11:10:20.969748Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-112657","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.133:2380"],"advertise-client-urls":["https://192.168.83.133:2379"]}
	
	
	==> etcd [fdb1f5d3d88e5858fba14591c807783a4a562eb1c18635ab0a6d79b3cfaf2963] <==
	{"level":"info","ts":"2025-11-01T11:10:56.612404Z","caller":"traceutil/trace.go:172","msg":"trace[1640049380] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"236.765545ms","start":"2025-11-01T11:10:56.375627Z","end":"2025-11-01T11:10:56.612392Z","steps":["trace[1640049380] 'process raft request'  (duration: 235.410418ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:10:56.612968Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.124296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2025-11-01T11:10:56.613242Z","caller":"traceutil/trace.go:172","msg":"trace[513679193] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:441; }","duration":"205.553423ms","start":"2025-11-01T11:10:56.407675Z","end":"2025-11-01T11:10:56.613228Z","steps":["trace[513679193] 'agreement among raft nodes before linearized reading'  (duration: 204.330513ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:10:56.613745Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.219904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T11:10:56.613863Z","caller":"traceutil/trace.go:172","msg":"trace[1570698760] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:441; }","duration":"171.342115ms","start":"2025-11-01T11:10:56.442511Z","end":"2025-11-01T11:10:56.613854Z","steps":["trace[1570698760] 'agreement among raft nodes before linearized reading'  (duration: 171.102772ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:10:56.614788Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"207.062273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-11-01T11:10:56.614818Z","caller":"traceutil/trace.go:172","msg":"trace[1663190627] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:441; }","duration":"207.096378ms","start":"2025-11-01T11:10:56.407714Z","end":"2025-11-01T11:10:56.614810Z","steps":["trace[1663190627] 'agreement among raft nodes before linearized reading'  (duration: 206.979438ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:10:56.906683Z","caller":"traceutil/trace.go:172","msg":"trace[1955872032] linearizableReadLoop","detail":"{readStateIndex:463; appliedIndex:463; }","duration":"268.835066ms","start":"2025-11-01T11:10:56.637823Z","end":"2025-11-01T11:10:56.906658Z","steps":["trace[1955872032] 'read index received'  (duration: 268.828946ms)","trace[1955872032] 'applied index is now lower than readState.Index'  (duration: 5.022µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.099542Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"461.674814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:endpointslicemirroring-controller\" limit:1 ","response":"range_response_count:1 size:850"}
	{"level":"info","ts":"2025-11-01T11:10:57.099651Z","caller":"traceutil/trace.go:172","msg":"trace[2068833302] range","detail":"{range_begin:/registry/clusterroles/system:controller:endpointslicemirroring-controller; range_end:; response_count:1; response_revision:441; }","duration":"461.841361ms","start":"2025-11-01T11:10:56.637793Z","end":"2025-11-01T11:10:57.099634Z","steps":["trace[2068833302] 'agreement among raft nodes before linearized reading'  (duration: 268.942825ms)","trace[2068833302] 'range keys from in-memory index tree'  (duration: 192.512829ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.099693Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:10:56.637778Z","time spent":"461.901172ms","remote":"127.0.0.1:38908","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":1,"response size":873,"request content":"key:\"/registry/clusterroles/system:controller:endpointslicemirroring-controller\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T11:10:57.100188Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.882765ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13542493675358751828 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-pmht9\" mod_revision:403 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-pmht9\" value_size:5042 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-pmht9\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T11:10:57.100259Z","caller":"traceutil/trace.go:172","msg":"trace[1530526721] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"463.872975ms","start":"2025-11-01T11:10:56.636375Z","end":"2025-11-01T11:10:57.100248Z","steps":["trace[1530526721] 'process raft request'  (duration: 270.398206ms)","trace[1530526721] 'compare'  (duration: 192.45219ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.100382Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:10:56.636265Z","time spent":"464.082049ms","remote":"127.0.0.1:38536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5093,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-pmht9\" mod_revision:403 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-pmht9\" value_size:5042 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-pmht9\" > >"}
	{"level":"info","ts":"2025-11-01T11:10:57.378672Z","caller":"traceutil/trace.go:172","msg":"trace[1675861580] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:464; }","duration":"259.313066ms","start":"2025-11-01T11:10:57.119339Z","end":"2025-11-01T11:10:57.378652Z","steps":["trace[1675861580] 'read index received'  (duration: 259.308269ms)","trace[1675861580] 'applied index is now lower than readState.Index'  (duration: 4.282µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.635923Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"516.563842ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:job-controller\" limit:1 ","response":"range_response_count:1 size:782"}
	{"level":"info","ts":"2025-11-01T11:10:57.636033Z","caller":"traceutil/trace.go:172","msg":"trace[1407849316] range","detail":"{range_begin:/registry/clusterroles/system:controller:job-controller; range_end:; response_count:1; response_revision:442; }","duration":"516.680219ms","start":"2025-11-01T11:10:57.119336Z","end":"2025-11-01T11:10:57.636016Z","steps":["trace[1407849316] 'agreement among raft nodes before linearized reading'  (duration: 260.14631ms)","trace[1407849316] 'range keys from in-memory index tree'  (duration: 256.334312ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.636067Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:10:57.119271Z","time spent":"516.788171ms","remote":"127.0.0.1:38908","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":805,"request content":"key:\"/registry/clusterroles/system:controller:job-controller\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T11:10:57.635948Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.355126ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13542493675358751836 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" mod_revision:435 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" value_size:7165 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T11:10:57.636590Z","caller":"traceutil/trace.go:172","msg":"trace[1818149970] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"518.299723ms","start":"2025-11-01T11:10:57.118272Z","end":"2025-11-01T11:10:57.636572Z","steps":["trace[1818149970] 'process raft request'  (duration: 261.264621ms)","trace[1818149970] 'compare'  (duration: 256.266124ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.636666Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:10:57.118261Z","time spent":"518.360228ms","remote":"127.0.0.1:38536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7227,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" mod_revision:435 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" value_size:7165 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" > >"}
	{"level":"info","ts":"2025-11-01T11:10:57.636726Z","caller":"traceutil/trace.go:172","msg":"trace[764704833] linearizableReadLoop","detail":"{readStateIndex:465; appliedIndex:464; }","duration":"193.739452ms","start":"2025-11-01T11:10:57.442975Z","end":"2025-11-01T11:10:57.636714Z","steps":["trace[764704833] 'read index received'  (duration: 72.933266ms)","trace[764704833] 'applied index is now lower than readState.Index'  (duration: 120.805311ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.636780Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.807452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T11:10:57.636801Z","caller":"traceutil/trace.go:172","msg":"trace[1123814084] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:443; }","duration":"193.829915ms","start":"2025-11-01T11:10:57.442965Z","end":"2025-11-01T11:10:57.636795Z","steps":["trace[1123814084] 'agreement among raft nodes before linearized reading'  (duration: 193.787452ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:10:57.741377Z","caller":"traceutil/trace.go:172","msg":"trace[1415454920] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"101.677672ms","start":"2025-11-01T11:10:57.639684Z","end":"2025-11-01T11:10:57.741362Z","steps":["trace[1415454920] 'process raft request'  (duration: 96.44196ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:11:11 up 1 min,  0 users,  load average: 1.69, 0.64, 0.23
	Linux pause-112657 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5] <==
	{"level":"warn","ts":"2025-11-01T11:10:44.109537Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":84,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.137176Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":85,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.163734Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":86,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.190603Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":87,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.216002Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":88,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.242385Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":89,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.268842Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":90,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.295111Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":91,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.320435Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.346882Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.371414Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.399784Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.425149Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.450472Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.477483Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.501272Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	E1101 11:10:44.501421       1 controller.go:97] Error removing old endpoints from kubernetes service: rpc error: code = Canceled desc = grpc: the client connection is closing
	W1101 11:10:44.588183       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1101 11:10:44.588271       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1101 11:10:45.589062       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1101 11:10:45.589826       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1101 11:10:46.588685       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1101 11:10:46.588944       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1101 11:10:47.588832       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1101 11:10:47.589143       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	
	
	==> kube-apiserver [f9ea1384edf60f795b201ee53bb5bb7090a53d71fa1667e6af09c1fdcfbe0740] <==
	I1101 11:10:55.372220       1 policy_source.go:240] refreshing policies
	I1101 11:10:55.372254       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 11:10:55.372389       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 11:10:55.382005       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 11:10:55.400024       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 11:10:55.402407       1 aggregator.go:171] initial CRD sync complete...
	I1101 11:10:55.402465       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 11:10:55.402484       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 11:10:55.402500       1 cache.go:39] Caches are synced for autoregister controller
	I1101 11:10:55.430690       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 11:10:55.435064       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 11:10:55.437003       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 11:10:55.437049       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 11:10:55.438706       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 11:10:55.449877       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 11:10:56.243882       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 11:10:56.616543       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1101 11:10:57.953634       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.83.133]
	I1101 11:10:57.955407       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 11:10:57.961748       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 11:10:58.172367       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 11:10:58.380039       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 11:10:58.442753       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 11:10:58.461768       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 11:11:00.394594       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67] <==
	I1101 11:10:39.853684       1 serving.go:386] Generated self-signed cert in-memory
	I1101 11:10:40.613436       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1101 11:10:40.613488       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:10:40.617489       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 11:10:40.617609       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 11:10:40.618086       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1101 11:10:40.618879       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [cd4cd50e19c80294183a84f1d6b4ba1319f1f8e73be2fdbbb0ea63eb0cfa3d1e] <==
	I1101 11:10:59.758672       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 11:10:59.758787       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:10:59.763160       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 11:10:59.763638       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 11:10:59.768635       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 11:10:59.771324       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 11:10:59.771786       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 11:10:59.771797       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 11:10:59.771808       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 11:10:59.773602       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 11:10:59.773718       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 11:10:59.775049       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 11:10:59.778758       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 11:10:59.778866       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 11:10:59.783366       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 11:10:59.789944       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:10:59.789993       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 11:10:59.790010       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 11:10:59.798199       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 11:10:59.804256       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 11:10:59.813658       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 11:10:59.819200       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 11:10:59.819485       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 11:10:59.819577       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-112657"
	I1101 11:10:59.819644       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [8cac02c746090b038b03b7e3666ef90276d94f274eb8926c189d832b02e7d27b] <==
	E1101 11:10:50.381399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-112657&limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 11:10:57.303773       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:10:57.303845       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.133"]
	E1101 11:10:57.304004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:10:57.352945       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 11:10:57.353057       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 11:10:57.353081       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:10:57.367352       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:10:57.367924       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:10:57.368077       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:10:57.376049       1 config.go:200] "Starting service config controller"
	I1101 11:10:57.376086       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:10:57.376106       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:10:57.376111       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:10:57.376126       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:10:57.376133       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:10:57.376625       1 config.go:309] "Starting node config controller"
	I1101 11:10:57.376662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:10:57.376670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:10:57.476780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:10:57.476808       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:10:57.476831       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79] <==
	I1101 11:10:01.891038       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:10:01.991370       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:10:01.991407       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.133"]
	E1101 11:10:01.991481       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:10:02.042143       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 11:10:02.042265       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 11:10:02.042436       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:10:02.061140       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:10:02.061880       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:10:02.062109       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:10:02.069940       1 config.go:200] "Starting service config controller"
	I1101 11:10:02.069952       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:10:02.069970       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:10:02.069973       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:10:02.069984       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:10:02.069988       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:10:02.073780       1 config.go:309] "Starting node config controller"
	I1101 11:10:02.073966       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:10:02.170872       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:10:02.170907       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:10:02.170957       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 11:10:02.175366       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [4f557d4e8f14008c0f3af610b5e7d21f6bc34a9ef9b305c98652539ec8b3a059] <==
	E1101 11:10:43.412977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.83.133:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 11:10:44.537003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.83.133:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 11:10:44.582046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.83.133:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 11:10:44.785423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.83.133:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 11:10:44.835125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.83.133:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 11:10:45.010711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.83.133:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 11:10:45.382392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.83.133:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 11:10:45.461215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.83.133:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 11:10:45.512442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.83.133:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 11:10:45.520227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.83.133:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 11:10:45.598746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.83.133:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 11:10:45.618051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.83.133:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 11:10:45.636982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.83.133:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 11:10:45.773890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.83.133:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 11:10:45.798774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.83.133:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 11:10:45.837105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.83.133:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 11:10:46.003678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.83.133:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 11:10:46.148137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.83.133:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 11:10:46.500604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.83.133:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 11:10:48.469482       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1101 11:10:48.469970       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 11:10:48.470024       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 11:10:48.470089       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:10:48.470186       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 11:10:48.470205       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a4aa50e95ae4ca4ff8424b31d4eed01ebd730345a23823c47c6c7c5d5f53b248] <==
	I1101 11:10:53.901490       1 serving.go:386] Generated self-signed cert in-memory
	I1101 11:10:55.417681       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 11:10:55.417843       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:10:55.423396       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 11:10:55.423439       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 11:10:55.423496       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:10:55.423523       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:10:55.423537       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 11:10:55.423542       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 11:10:55.423864       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 11:10:55.423942       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 11:10:55.524731       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 11:10:55.524736       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 11:10:55.524804       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 11:10:52 pause-112657 kubelet[3528]: E1101 11:10:52.555157    3528 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-112657\" not found" node="pause-112657"
	Nov 01 11:10:52 pause-112657 kubelet[3528]: E1101 11:10:52.557865    3528 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-112657\" not found" node="pause-112657"
	Nov 01 11:10:53 pause-112657 kubelet[3528]: E1101 11:10:53.560973    3528 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-112657\" not found" node="pause-112657"
	Nov 01 11:10:53 pause-112657 kubelet[3528]: E1101 11:10:53.562415    3528 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-112657\" not found" node="pause-112657"
	Nov 01 11:10:53 pause-112657 kubelet[3528]: E1101 11:10:53.562934    3528 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-112657\" not found" node="pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.432389    3528 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: E1101 11:10:55.456426    3528 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-112657\" already exists" pod="kube-system/kube-apiserver-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.456461    3528 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: E1101 11:10:55.475076    3528 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-112657\" already exists" pod="kube-system/kube-controller-manager-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.475212    3528 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.487624    3528 kubelet_node_status.go:124] "Node was previously registered" node="pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.487814    3528 kubelet_node_status.go:78] "Successfully registered node" node="pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.487871    3528 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.490573    3528 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: E1101 11:10:55.492976    3528 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-112657\" already exists" pod="kube-system/kube-scheduler-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.493000    3528 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: E1101 11:10:55.506981    3528 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-112657\" already exists" pod="kube-system/etcd-pause-112657"
	Nov 01 11:10:56 pause-112657 kubelet[3528]: I1101 11:10:56.298792    3528 apiserver.go:52] "Watching apiserver"
	Nov 01 11:10:56 pause-112657 kubelet[3528]: I1101 11:10:56.332617    3528 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 11:10:56 pause-112657 kubelet[3528]: I1101 11:10:56.402690    3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93cedff1-d264-4c71-af06-95e4b53e637e-xtables-lock\") pod \"kube-proxy-pmht9\" (UID: \"93cedff1-d264-4c71-af06-95e4b53e637e\") " pod="kube-system/kube-proxy-pmht9"
	Nov 01 11:10:56 pause-112657 kubelet[3528]: I1101 11:10:56.403733    3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93cedff1-d264-4c71-af06-95e4b53e637e-lib-modules\") pod \"kube-proxy-pmht9\" (UID: \"93cedff1-d264-4c71-af06-95e4b53e637e\") " pod="kube-system/kube-proxy-pmht9"
	Nov 01 11:11:00 pause-112657 kubelet[3528]: E1101 11:11:00.532477    3528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761995460531637145  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 11:11:00 pause-112657 kubelet[3528]: E1101 11:11:00.532540    3528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761995460531637145  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 11:11:10 pause-112657 kubelet[3528]: E1101 11:11:10.535104    3528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761995470533878361  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 11:11:10 pause-112657 kubelet[3528]: E1101 11:11:10.535205    3528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761995470533878361  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-112657 -n pause-112657
helpers_test.go:269: (dbg) Run:  kubectl --context pause-112657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-112657 -n pause-112657
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-112657 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-112657 logs -n 25: (2.00306497s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p kubernetes-upgrade-272276                                                                                                                                                                                            │ kubernetes-upgrade-272276 │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:08 UTC │
	│ delete  │ -p running-upgrade-768085                                                                                                                                                                                               │ running-upgrade-768085    │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:08 UTC │
	│ start   │ -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-272276 │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:08 UTC │
	│ ssh     │ -p NoKubernetes-028702 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-028702       │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │                     │
	│ delete  │ -p NoKubernetes-028702                                                                                                                                                                                                  │ NoKubernetes-028702       │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:08 UTC │
	│ start   │ -p stopped-upgrade-391167 --memory=3072 --vm-driver=kvm2  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-391167    │ jenkins │ v1.32.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:09 UTC │
	│ start   │ -p guest-290834 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                 │ guest-290834              │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:09 UTC │
	│ start   │ -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio                                                                                                         │ kubernetes-upgrade-272276 │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │                     │
	│ start   │ -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                  │ kubernetes-upgrade-272276 │ jenkins │ v1.37.0 │ 01 Nov 25 11:08 UTC │ 01 Nov 25 11:09 UTC │
	│ start   │ -p pause-112657 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio                                                                                                                 │ pause-112657              │ jenkins │ v1.37.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:10 UTC │
	│ stop    │ stopped-upgrade-391167 stop                                                                                                                                                                                             │ stopped-upgrade-391167    │ jenkins │ v1.32.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:09 UTC │
	│ start   │ -p stopped-upgrade-391167 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                  │ stopped-upgrade-391167    │ jenkins │ v1.37.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:10 UTC │
	│ start   │ -p cert-expiration-917729 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio                                                                                                                 │ cert-expiration-917729    │ jenkins │ v1.37.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:10 UTC │
	│ delete  │ -p kubernetes-upgrade-272276                                                                                                                                                                                            │ kubernetes-upgrade-272276 │ jenkins │ v1.37.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:09 UTC │
	│ start   │ -p cert-options-970426 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-970426       │ jenkins │ v1.37.0 │ 01 Nov 25 11:09 UTC │ 01 Nov 25 11:10 UTC │
	│ start   │ -p pause-112657 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-112657              │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:11 UTC │
	│ mount   │ /home/jenkins:/minikube-host --profile stopped-upgrade-391167 --v 0 --9p-version 9p2000.L --gid docker --ip  --msize 262144 --port 0 --type 9p --uid docker                                                             │ stopped-upgrade-391167    │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │                     │
	│ delete  │ -p stopped-upgrade-391167                                                                                                                                                                                               │ stopped-upgrade-391167    │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:10 UTC │
	│ start   │ -p auto-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio                                                                                                   │ auto-216814               │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │                     │
	│ delete  │ -p cert-expiration-917729                                                                                                                                                                                               │ cert-expiration-917729    │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:10 UTC │
	│ start   │ -p kindnet-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio                                                                                  │ kindnet-216814            │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │                     │
	│ ssh     │ cert-options-970426 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                             │ cert-options-970426       │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:10 UTC │
	│ ssh     │ -p cert-options-970426 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                           │ cert-options-970426       │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:10 UTC │
	│ delete  │ -p cert-options-970426                                                                                                                                                                                                  │ cert-options-970426       │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │ 01 Nov 25 11:10 UTC │
	│ start   │ -p calico-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio                                                                                    │ calico-216814             │ jenkins │ v1.37.0 │ 01 Nov 25 11:10 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:10:44
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:10:44.045405  109110 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:10:44.045672  109110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:10:44.045683  109110 out.go:374] Setting ErrFile to fd 2...
	I1101 11:10:44.045687  109110 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:10:44.045903  109110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 11:10:44.046412  109110 out.go:368] Setting JSON to false
	I1101 11:10:44.047269  109110 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10392,"bootTime":1761985052,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 11:10:44.047321  109110 start.go:143] virtualization: kvm guest
	I1101 11:10:44.049367  109110 out.go:179] * [calico-216814] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 11:10:44.051361  109110 notify.go:221] Checking for updates...
	I1101 11:10:44.051397  109110 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:10:44.053336  109110 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:10:44.054757  109110 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:10:44.056093  109110 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 11:10:44.057430  109110 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 11:10:44.058657  109110 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:10:44.060525  109110 config.go:182] Loaded profile config "auto-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:44.060671  109110 config.go:182] Loaded profile config "guest-290834": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 11:10:44.060801  109110 config.go:182] Loaded profile config "kindnet-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:44.060977  109110 config.go:182] Loaded profile config "pause-112657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:44.061116  109110 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:10:44.098033  109110 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 11:10:44.099290  109110 start.go:309] selected driver: kvm2
	I1101 11:10:44.099325  109110 start.go:930] validating driver "kvm2" against <nil>
	I1101 11:10:44.099343  109110 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:10:44.100137  109110 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 11:10:44.100387  109110 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:10:44.100426  109110 cni.go:84] Creating CNI manager for "calico"
	I1101 11:10:44.100437  109110 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1101 11:10:44.100488  109110 start.go:353] cluster config:
	{Name:calico-216814 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-216814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:10:44.100639  109110 iso.go:125] acquiring lock: {Name:mk49d9a272bb99d336f82dfc5631a4c8ce9271c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:10:44.102238  109110 out.go:179] * Starting "calico-216814" primary control-plane node in "calico-216814" cluster
	I1101 11:10:43.189626  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:43.190362  108661 main.go:143] libmachine: no network interface addresses found for domain auto-216814 (source=lease)
	I1101 11:10:43.190382  108661 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:43.191757  108661 main.go:143] libmachine: unable to find current IP address of domain auto-216814 in network mk-auto-216814 (interfaces detected: [])
	I1101 11:10:43.191804  108661 retry.go:31] will retry after 4.031370035s: waiting for domain to come up
	I1101 11:10:47.228056  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.228746  108661 main.go:143] libmachine: domain auto-216814 has current primary IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.228765  108661 main.go:143] libmachine: found domain IP: 192.168.39.236
	I1101 11:10:47.228772  108661 main.go:143] libmachine: reserving static IP address...
	I1101 11:10:47.229498  108661 main.go:143] libmachine: unable to find host DHCP lease matching {name: "auto-216814", mac: "52:54:00:37:0c:61", ip: "192.168.39.236"} in network mk-auto-216814
	I1101 11:10:48.878979  108776 start.go:364] duration metric: took 31.424039299s to acquireMachinesLock for "kindnet-216814"
	I1101 11:10:48.879053  108776 start.go:93] Provisioning new machine with config: &{Name:kindnet-216814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-216814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:10:48.879213  108776 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 11:10:44.103359  109110 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:10:44.103397  109110 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 11:10:44.103405  109110 cache.go:59] Caching tarball of preloaded images
	I1101 11:10:44.103474  109110 preload.go:233] Found /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 11:10:44.103485  109110 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:10:44.103603  109110 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/config.json ...
	I1101 11:10:44.103623  109110 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/config.json: {Name:mkfde681c122cd962ee1bcd79b983564ae0573cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:44.103765  109110 start.go:360] acquireMachinesLock for calico-216814: {Name:mk53a05d125fe91ead2a39c6bbf2ba926c471e2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 11:10:47.436681  108661 main.go:143] libmachine: reserved static IP address 192.168.39.236 for domain auto-216814
	I1101 11:10:47.436716  108661 main.go:143] libmachine: waiting for SSH...
	I1101 11:10:47.436725  108661 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 11:10:47.440423  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.440898  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:minikube Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.440924  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.441130  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:47.441414  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:47.441428  108661 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 11:10:47.550862  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:10:47.551260  108661 main.go:143] libmachine: domain creation complete
	I1101 11:10:47.552783  108661 machine.go:94] provisionDockerMachine start ...
	I1101 11:10:47.555369  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.555748  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.555772  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.555951  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:47.556140  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:47.556150  108661 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:10:47.663990  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 11:10:47.664023  108661 buildroot.go:166] provisioning hostname "auto-216814"
	I1101 11:10:47.667067  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.667466  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.667491  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.667682  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:47.667923  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:47.667938  108661 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-216814 && echo "auto-216814" | sudo tee /etc/hostname
	I1101 11:10:47.792003  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-216814
	
	I1101 11:10:47.795497  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.795942  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.795976  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.796168  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:47.796438  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:47.796464  108661 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-216814' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-216814/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-216814' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:10:47.917458  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:10:47.917488  108661 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 11:10:47.917518  108661 buildroot.go:174] setting up certificates
	I1101 11:10:47.917553  108661 provision.go:84] configureAuth start
	I1101 11:10:47.920690  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.921157  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.921187  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.923911  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.924334  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:47.924359  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:47.924524  108661 provision.go:143] copyHostCerts
	I1101 11:10:47.924601  108661 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem, removing ...
	I1101 11:10:47.924626  108661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem
	I1101 11:10:47.924713  108661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 11:10:47.924862  108661 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem, removing ...
	I1101 11:10:47.924877  108661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem
	I1101 11:10:47.924926  108661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 11:10:47.925016  108661 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem, removing ...
	I1101 11:10:47.925024  108661 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem
	I1101 11:10:47.925057  108661 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 11:10:47.925134  108661 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.auto-216814 san=[127.0.0.1 192.168.39.236 auto-216814 localhost minikube]
	I1101 11:10:48.136843  108661 provision.go:177] copyRemoteCerts
	I1101 11:10:48.136920  108661 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:10:48.139963  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.140367  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.140397  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.140580  108661 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/auto-216814/id_rsa Username:docker}
	I1101 11:10:48.227500  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 11:10:48.260736  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:10:48.295336  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 11:10:48.326894  108661 provision.go:87] duration metric: took 409.320663ms to configureAuth
	I1101 11:10:48.326932  108661 buildroot.go:189] setting minikube options for container-runtime
	I1101 11:10:48.327152  108661 config.go:182] Loaded profile config "auto-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:48.330414  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.330812  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.330848  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.331047  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:48.331253  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:48.331268  108661 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:10:48.608801  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:10:48.608846  108661 machine.go:97] duration metric: took 1.056044143s to provisionDockerMachine
	I1101 11:10:48.608857  108661 client.go:176] duration metric: took 22.006931282s to LocalClient.Create
	I1101 11:10:48.608874  108661 start.go:167] duration metric: took 22.007003584s to libmachine.API.Create "auto-216814"
	I1101 11:10:48.608886  108661 start.go:293] postStartSetup for "auto-216814" (driver="kvm2")
	I1101 11:10:48.608898  108661 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:10:48.608982  108661 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:10:48.612238  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.612737  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.612773  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.612941  108661 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/auto-216814/id_rsa Username:docker}
	I1101 11:10:48.702368  108661 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:10:48.708323  108661 info.go:137] Remote host: Buildroot 2025.02
	I1101 11:10:48.708353  108661 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 11:10:48.708417  108661 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 11:10:48.708488  108661 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem -> 739982.pem in /etc/ssl/certs
	I1101 11:10:48.708599  108661 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:10:48.721869  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:10:48.760855  108661 start.go:296] duration metric: took 151.950822ms for postStartSetup
	I1101 11:10:48.764227  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.764790  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.764826  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.765109  108661 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/config.json ...
	I1101 11:10:48.765303  108661 start.go:128] duration metric: took 22.181722285s to createHost
	I1101 11:10:48.768246  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.768675  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.768705  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.768939  108661 main.go:143] libmachine: Using SSH client type: native
	I1101 11:10:48.769172  108661 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.236 22 <nil> <nil>}
	I1101 11:10:48.769184  108661 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 11:10:48.878712  108661 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761995448.840573856
	
	I1101 11:10:48.878735  108661 fix.go:216] guest clock: 1761995448.840573856
	I1101 11:10:48.878745  108661 fix.go:229] Guest: 2025-11-01 11:10:48.840573856 +0000 UTC Remote: 2025-11-01 11:10:48.765314896 +0000 UTC m=+36.569731817 (delta=75.25896ms)
	I1101 11:10:48.878765  108661 fix.go:200] guest clock delta is within tolerance: 75.25896ms
	I1101 11:10:48.878771  108661 start.go:83] releasing machines lock for "auto-216814", held for 22.295365601s
	I1101 11:10:48.882551  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.883219  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.883257  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.883919  108661 ssh_runner.go:195] Run: cat /version.json
	I1101 11:10:48.884043  108661 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:10:48.887232  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.887417  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.887673  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.887705  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.887854  108661 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/auto-216814/id_rsa Username:docker}
	I1101 11:10:48.887880  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:48.887921  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:48.888080  108661 sshutil.go:53] new ssh client: &{IP:192.168.39.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/auto-216814/id_rsa Username:docker}
	I1101 11:10:48.976751  108661 ssh_runner.go:195] Run: systemctl --version
	I1101 11:10:49.005631  108661 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:10:49.182607  108661 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:10:49.192823  108661 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:10:49.192917  108661 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:10:49.219682  108661 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 11:10:49.219714  108661 start.go:496] detecting cgroup driver to use...
	I1101 11:10:49.219801  108661 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:10:49.239349  108661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:10:49.257315  108661 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:10:49.257372  108661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:10:49.276366  108661 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:10:49.297994  108661 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:10:49.455446  108661 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:10:49.673145  108661 docker.go:234] disabling docker service ...
	I1101 11:10:49.673242  108661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:10:49.694708  108661 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:10:49.712035  108661 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:10:49.872134  108661 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:10:50.033720  108661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:10:50.056124  108661 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:10:50.081761  108661 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:10:50.081841  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.099457  108661 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:10:50.099551  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.113560  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.127515  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.142996  108661 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:10:50.157665  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.176378  108661 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.202945  108661 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:10:50.216889  108661 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:10:50.228906  108661 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 11:10:50.228973  108661 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 11:10:50.257141  108661 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:10:50.271423  108661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:50.446795  108661 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:10:50.581601  108661 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:10:50.581691  108661 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:10:50.588255  108661 start.go:564] Will wait 60s for crictl version
	I1101 11:10:50.588323  108661 ssh_runner.go:195] Run: which crictl
	I1101 11:10:50.592919  108661 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 11:10:50.640978  108661 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 11:10:50.641065  108661 ssh_runner.go:195] Run: crio --version
	I1101 11:10:50.682377  108661 ssh_runner.go:195] Run: crio --version
	I1101 11:10:50.722263  108661 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 11:10:50.727023  108661 main.go:143] libmachine: domain auto-216814 has defined MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:50.727689  108661 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:37:0c:61", ip: ""} in network mk-auto-216814: {Iface:virbr1 ExpiryTime:2025-11-01 12:10:44 +0000 UTC Type:0 Mac:52:54:00:37:0c:61 Iaid: IPaddr:192.168.39.236 Prefix:24 Hostname:auto-216814 Clientid:01:52:54:00:37:0c:61}
	I1101 11:10:50.727734  108661 main.go:143] libmachine: domain auto-216814 has defined IP address 192.168.39.236 and MAC address 52:54:00:37:0c:61 in network mk-auto-216814
	I1101 11:10:50.728058  108661 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 11:10:50.733289  108661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:10:50.749696  108661 kubeadm.go:884] updating cluster {Name:auto-216814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-216814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.236 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:10:50.749855  108661 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:10:50.749927  108661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:10:50.793414  108661 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 11:10:50.793486  108661 ssh_runner.go:195] Run: which lz4
	I1101 11:10:50.798740  108661 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 11:10:50.804437  108661 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 11:10:50.804478  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 11:10:48.881051  108776 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1101 11:10:48.881331  108776 start.go:159] libmachine.API.Create for "kindnet-216814" (driver="kvm2")
	I1101 11:10:48.881382  108776 client.go:173] LocalClient.Create starting
	I1101 11:10:48.881480  108776 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem
	I1101 11:10:48.881555  108776 main.go:143] libmachine: Decoding PEM data...
	I1101 11:10:48.881586  108776 main.go:143] libmachine: Parsing certificate...
	I1101 11:10:48.881710  108776 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem
	I1101 11:10:48.881748  108776 main.go:143] libmachine: Decoding PEM data...
	I1101 11:10:48.881760  108776 main.go:143] libmachine: Parsing certificate...
	I1101 11:10:48.882336  108776 main.go:143] libmachine: creating domain...
	I1101 11:10:48.882352  108776 main.go:143] libmachine: creating network...
	I1101 11:10:48.884321  108776 main.go:143] libmachine: found existing default network
	I1101 11:10:48.884717  108776 main.go:143] libmachine: <network connections='3'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 11:10:48.885818  108776 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:df:ad:39} reservation:<nil>}
	I1101 11:10:48.886435  108776 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:6f:07:43} reservation:<nil>}
	I1101 11:10:48.887503  108776 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ebca40}
	I1101 11:10:48.887631  108776 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-kindnet-216814</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 11:10:48.896009  108776 main.go:143] libmachine: creating private network mk-kindnet-216814 192.168.61.0/24...
	I1101 11:10:48.983442  108776 main.go:143] libmachine: private network mk-kindnet-216814 192.168.61.0/24 created
	I1101 11:10:48.983822  108776 main.go:143] libmachine: <network>
	  <name>mk-kindnet-216814</name>
	  <uuid>e1a0d679-49d4-4ef6-a2f5-d7355a12eff1</uuid>
	  <bridge name='virbr3' stp='on' delay='0'/>
	  <mac address='52:54:00:4f:f0:de'/>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 11:10:48.983866  108776 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814 ...
	I1101 11:10:48.983905  108776 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 11:10:48.983922  108776 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 11:10:48.984006  108776 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21830-70113/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 11:10:49.252250  108776 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/id_rsa...
	I1101 11:10:49.629882  108776 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/kindnet-216814.rawdisk...
	I1101 11:10:49.629926  108776 main.go:143] libmachine: Writing magic tar header
	I1101 11:10:49.629944  108776 main.go:143] libmachine: Writing SSH key tar header
	I1101 11:10:49.630018  108776 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814 ...
	I1101 11:10:49.630085  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814
	I1101 11:10:49.630140  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814 (perms=drwx------)
	I1101 11:10:49.630160  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube/machines
	I1101 11:10:49.630171  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube/machines (perms=drwxr-xr-x)
	I1101 11:10:49.630183  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 11:10:49.630192  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113/.minikube (perms=drwxr-xr-x)
	I1101 11:10:49.630203  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21830-70113
	I1101 11:10:49.630211  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21830-70113 (perms=drwxrwxr-x)
	I1101 11:10:49.630221  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 11:10:49.630229  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 11:10:49.630236  108776 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 11:10:49.630243  108776 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 11:10:49.630252  108776 main.go:143] libmachine: checking permissions on dir: /home
	I1101 11:10:49.630272  108776 main.go:143] libmachine: skipping /home - not owner
	I1101 11:10:49.630282  108776 main.go:143] libmachine: defining domain...
	I1101 11:10:49.631766  108776 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>kindnet-216814</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/kindnet-216814.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-kindnet-216814'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
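The driver defines the guest from the hand-built XML above, makes sure both libvirt networks are active, and only then boots the domain. Below is a minimal sketch of that define/activate/start sequence, assuming the libvirt.org/go/libvirt Go bindings; the URI and network names are the ones from this log, the XML string is abbreviated, and error handling is trimmed. It is an illustration of the pattern, not minikube's actual driver code.

// Sketch only: define a KVM guest from XML, ensure its networks are up,
// then start it, mirroring the "defining domain" / "ensuring networks are
// active" / "starting domain" steps logged above.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the cluster config
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domainXML would be the full <domain type='kvm'>...</domain> document above.
	domainXML := "<domain type='kvm'>...</domain>"
	dom, err := conn.DomainDefineXML(domainXML) // persistently define the guest
	if err != nil {
		log.Fatalf("define: %v", err)
	}
	defer dom.Free()

	// Make sure both attached networks are active before the guest boots.
	for _, name := range []string{"default", "mk-kindnet-216814"} {
		nw, err := conn.LookupNetworkByName(name)
		if err != nil {
			log.Fatalf("lookup network %s: %v", name, err)
		}
		if active, _ := nw.IsActive(); !active {
			if err := nw.Create(); err != nil { // start the inactive network
				log.Fatalf("start network %s: %v", name, err)
			}
		}
		nw.Free()
	}

	if err := dom.Create(); err != nil { // boot the defined domain
		log.Fatalf("start domain: %v", err)
	}
}

Defining only persists the configuration; starting is a separate step, which is why the log re-reads and prints the domain XML just before booting, by then filled in by libvirt with the UUID, MAC addresses and PCI slot assignments.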
	
	I1101 11:10:49.637155  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:b3:49:f1 in network default
	I1101 11:10:49.637934  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:49.637951  108776 main.go:143] libmachine: starting domain...
	I1101 11:10:49.637955  108776 main.go:143] libmachine: ensuring networks are active...
	I1101 11:10:49.639166  108776 main.go:143] libmachine: Ensuring network default is active
	I1101 11:10:49.639716  108776 main.go:143] libmachine: Ensuring network mk-kindnet-216814 is active
	I1101 11:10:49.640503  108776 main.go:143] libmachine: getting domain XML...
	I1101 11:10:49.641840  108776 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>kindnet-216814</name>
	  <uuid>bf53b502-1acf-4053-9907-76d4f22d4fb0</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/kindnet-216814.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:56:75:ca'/>
	      <source network='mk-kindnet-216814'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:b3:49:f1'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 11:10:51.079963  108776 main.go:143] libmachine: waiting for domain to start...
	I1101 11:10:51.082179  108776 main.go:143] libmachine: domain is now running
	I1101 11:10:51.082204  108776 main.go:143] libmachine: waiting for IP...
	I1101 11:10:51.083252  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:51.084062  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:51.084082  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:51.084716  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:51.084786  108776 retry.go:31] will retry after 307.449312ms: waiting for domain to come up
	I1101 11:10:51.394756  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:51.395960  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:51.396000  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:51.396586  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:51.396635  108776 retry.go:31] will retry after 264.585062ms: waiting for domain to come up
	I1101 11:10:51.663136  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:51.663929  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:51.663951  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:51.664429  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:51.664491  108776 retry.go:31] will retry after 487.454053ms: waiting for domain to come up
	I1101 11:10:52.153810  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:52.154717  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:52.154740  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:52.155299  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:52.155345  108776 retry.go:31] will retry after 519.149478ms: waiting for domain to come up
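The repeated "will retry after …: waiting for domain to come up" lines are a plain poll-with-backoff loop: ask for the guest's address (DHCP lease first, ARP as a fallback, as the log says) and sleep a short, growing, slightly jittered interval between attempts. A rough sketch of that pattern follows; lookupIP is a hypothetical stand-in for the lease/ARP queries and is not a minikube or libvirt function.

// Sketch only: poll for a guest IP with growing, jittered sleeps, in the
// spirit of the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("no interface addresses found yet")

// lookupIP is a placeholder: imagine it queries the domain's DHCP lease and
// falls back to the ARP table, as the driver log describes.
func lookupIP(domain string) (string, error) {
	return "", errNoIP
}

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Jitter the sleep and grow it gradually so a slow boot is not
		// hammered with lookups (the log shows waits from ~264ms up to ~1.7s).
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 2*time.Second {
			backoff += backoff / 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of domain %s", domain)
}

func main() {
	ip, err := waitForIP("kindnet-216814", 30*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("domain IP:", ip)
}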
	I1101 11:10:48.809247  108549 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5 4f557d4e8f14008c0f3af610b5e7d21f6bc34a9ef9b305c98652539ec8b3a059 a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67 362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79 e6255bd1d028d6e695d0e2603839a8b912279be15f15055ab2fdcac158a767f2 17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950 c2f04ed6767836b908b324e86b268647055e7e90747f6a36ae0bf4e086b7e5a5 ebc5a0c73a4110e676ab0c3f4c380b85618807e05f7a96b71f987beeff81cb68: (10.575117923s)
	I1101 11:10:48.809329  108549 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 11:10:48.854114  108549 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:10:48.867871  108549 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5631 Nov  1 11:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5638 Nov  1 11:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1954 Nov  1 11:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5590 Nov  1 11:09 /etc/kubernetes/scheduler.conf
	
	I1101 11:10:48.867946  108549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:10:48.881833  108549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:10:48.899403  108549 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:10:48.899482  108549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:10:48.918325  108549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:10:48.931987  108549 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:10:48.932051  108549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:10:48.945019  108549 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:10:48.957958  108549 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:10:48.958017  108549 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:10:48.972706  108549 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:10:48.990484  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:49.052042  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:49.874427  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:50.180062  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:50.269345  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:50.403042  108549 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:10:50.403157  108549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:10:50.903250  108549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:10:51.403469  108549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:10:51.468793  108549 api_server.go:72] duration metric: took 1.065769017s to wait for apiserver process to appear ...
	I1101 11:10:51.468829  108549 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:10:51.468867  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:51.469626  108549 api_server.go:269] stopped: https://192.168.83.133:8443/healthz: Get "https://192.168.83.133:8443/healthz": dial tcp 192.168.83.133:8443: connect: connection refused
	I1101 11:10:51.969291  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:52.706461  108661 crio.go:462] duration metric: took 1.907772004s to copy over tarball
	I1101 11:10:52.706560  108661 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 11:10:54.697099  108661 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.990496258s)
	I1101 11:10:54.697150  108661 crio.go:469] duration metric: took 1.990656337s to extract the tarball
	I1101 11:10:54.697162  108661 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 11:10:54.755508  108661 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:10:54.809797  108661 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:10:54.809827  108661 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:10:54.809837  108661 kubeadm.go:935] updating node { 192.168.39.236 8443 v1.34.1 crio true true} ...
	I1101 11:10:54.809957  108661 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-216814 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-216814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:10:54.810050  108661 ssh_runner.go:195] Run: crio config
	I1101 11:10:54.867837  108661 cni.go:84] Creating CNI manager for ""
	I1101 11:10:54.867863  108661 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:10:54.867883  108661 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:10:54.867906  108661 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.236 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-216814 NodeName:auto-216814 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:10:54.868032  108661 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-216814"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.236"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.236"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
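The kubeadm/kubelet/kube-proxy YAML above is not hand-written; it is rendered from the kubeadm options struct logged a few lines earlier (advertise address, node name, pod subnet, and so on). The snippet below is a hypothetical illustration of that render step with text/template; the struct and template cover only a small slice of the real config and the field names are illustrative, not minikube's own types.

// Sketch only: render a fragment of a kubeadm InitConfiguration from a
// small options struct, in the spirit of the "kubeadm config:" dump above.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.39.236",
		BindPort:         8443,
		NodeName:         "auto-216814",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.34.1",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	if err := tmpl.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}

The rendered document is what gets copied to /var/tmp/minikube/kubeadm.yaml.new in the next steps of the log.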
	
	I1101 11:10:54.868100  108661 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:10:54.881733  108661 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:10:54.881820  108661 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:10:54.899081  108661 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I1101 11:10:54.928184  108661 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:10:54.953547  108661 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 11:10:54.978376  108661 ssh_runner.go:195] Run: grep 192.168.39.236	control-plane.minikube.internal$ /etc/hosts
	I1101 11:10:54.982891  108661 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
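The one-liner just above pins control-plane.minikube.internal in /etc/hosts idempotently: filter out any previous mapping, append the fresh one, write to a temp file, and copy it back. The same idea expressed in Go, for illustration only (pinHost is a made-up helper name, and touching the real /etc/hosts requires root):

// Sketch only: idempotently pin a hostname to an IP in /etc/hosts,
// mirroring the grep -v + echo + cp one-liner logged above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale tab-separated line that already maps this hostname.
		if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
			continue
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.236", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}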
	I1101 11:10:55.003844  108661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:55.169051  108661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:10:55.211965  108661 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814 for IP: 192.168.39.236
	I1101 11:10:55.211996  108661 certs.go:195] generating shared ca certs ...
	I1101 11:10:55.212017  108661 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.212246  108661 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 11:10:55.212316  108661 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 11:10:55.212331  108661 certs.go:257] generating profile certs ...
	I1101 11:10:55.212409  108661 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.key
	I1101 11:10:55.212427  108661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt with IP's: []
	I1101 11:10:55.442098  108661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt ...
	I1101 11:10:55.442149  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: {Name:mk1d4b75890cec9adcc5b06d3f96aff1213acbea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.442355  108661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.key ...
	I1101 11:10:55.442372  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.key: {Name:mk1b07a8c6fe28f5b7485a2ae6b2d9f6e6454f98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.442497  108661 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key.8a5bb5eb
	I1101 11:10:55.442516  108661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt.8a5bb5eb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.236]
	I1101 11:10:55.555429  108661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt.8a5bb5eb ...
	I1101 11:10:55.555459  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt.8a5bb5eb: {Name:mk159fe587f63b7fc52d3cf379601116578d91a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.555636  108661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key.8a5bb5eb ...
	I1101 11:10:55.555650  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key.8a5bb5eb: {Name:mk6d3bd063c441aee9b2c9299f2a8eb783f60102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.555734  108661 certs.go:382] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt.8a5bb5eb -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt
	I1101 11:10:55.555816  108661 certs.go:386] copying /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key.8a5bb5eb -> /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key
	I1101 11:10:55.555874  108661 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.key
	I1101 11:10:55.555890  108661 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.crt with IP's: []
	I1101 11:10:55.822211  108661 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.crt ...
	I1101 11:10:55.822242  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.crt: {Name:mke8d885b59ddfee589dfe7c2d3f001d6c2b17f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:55.822452  108661 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.key ...
	I1101 11:10:55.822468  108661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.key: {Name:mk5922ac7e4c4a45c9e90672fdf263b964250c14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
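The certs.go lines above issue the profile certificates from the existing minikubeCA: a client cert for minikube-user, an apiserver serving cert whose SANs are [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.236], and an aggregator proxy-client cert. The sketch below shows the general shape of issuing a serving certificate with IP SANs from a CA key pair using crypto/x509; it generates a throwaway CA for self-containment, and key sizes, serial handling and file layout are simplified rather than copied from minikube.

// Sketch only: sign a serving certificate with IP SANs from a CA key pair,
// roughly what the "generating signed profile cert" steps above do.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A throwaway CA for the example; minikube reuses its existing minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Serving certificate with the IP SANs seen in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.236"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}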
	I1101 11:10:55.822683  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem (1338 bytes)
	W1101 11:10:55.822721  108661 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998_empty.pem, impossibly tiny 0 bytes
	I1101 11:10:55.822731  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 11:10:55.822751  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:10:55.822774  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:10:55.822798  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 11:10:55.822844  108661 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:10:55.823403  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:10:55.859931  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:10:55.900009  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:10:55.952497  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:10:55.988141  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1101 11:10:56.026774  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 11:10:56.062680  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:10:56.098038  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:10:56.130519  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /usr/share/ca-certificates/739982.pem (1708 bytes)
	I1101 11:10:56.168341  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:10:56.203660  108661 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem --> /usr/share/ca-certificates/73998.pem (1338 bytes)
	I1101 11:10:56.236034  108661 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:10:56.262376  108661 ssh_runner.go:195] Run: openssl version
	I1101 11:10:56.272000  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739982.pem && ln -fs /usr/share/ca-certificates/739982.pem /etc/ssl/certs/739982.pem"
	I1101 11:10:56.290893  108661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739982.pem
	I1101 11:10:56.298078  108661 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:03 /usr/share/ca-certificates/739982.pem
	I1101 11:10:56.298154  108661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739982.pem
	I1101 11:10:56.308123  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/739982.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:10:56.324009  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:10:56.339960  108661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:56.348138  108661 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:56.348217  108661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:10:56.359381  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:10:56.374912  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73998.pem && ln -fs /usr/share/ca-certificates/73998.pem /etc/ssl/certs/73998.pem"
	I1101 11:10:56.395015  108661 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73998.pem
	I1101 11:10:56.402005  108661 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:03 /usr/share/ca-certificates/73998.pem
	I1101 11:10:56.402078  108661 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73998.pem
	I1101 11:10:56.410053  108661 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73998.pem /etc/ssl/certs/51391683.0"
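Installing the extra CA files above follows the standard OpenSSL trust-store convention: copy the PEM into /usr/share/ca-certificates, compute its subject hash, and symlink /etc/ssl/certs/<hash>.0 to the PEM. A small sketch of the same idea in Go (shelling out to openssl, as the log does) is below; the paths are the ones from the log, installTrustedCA is a made-up helper name, and writing under /etc/ssl/certs needs root.

// Sketch only: link a CA PEM into /etc/ssl/certs under its subject hash,
// the same convention as the "openssl x509 -hash" + "ln -fs" pair above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installTrustedCA(pemPath string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash that
	// OpenSSL uses to look the certificate up in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, mirroring the "test -L ... || ln -fs" guard.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installTrustedCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}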
	I1101 11:10:56.429933  108661 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:10:56.435591  108661 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 11:10:56.435669  108661 kubeadm.go:401] StartCluster: {Name:auto-216814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-216814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.236 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:10:56.435771  108661 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:10:56.435842  108661 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:10:56.487982  108661 cri.go:89] found id: ""
	I1101 11:10:56.488068  108661 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:10:56.501860  108661 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:10:56.516234  108661 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:10:56.529441  108661 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:10:56.529467  108661 kubeadm.go:158] found existing configuration files:
	
	I1101 11:10:56.529545  108661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:10:56.543023  108661 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:10:56.543103  108661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:10:56.557630  108661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:10:56.571526  108661 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:10:56.571630  108661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:10:56.588744  108661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:10:56.606447  108661 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:10:56.606557  108661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:10:56.631463  108661 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:10:56.654238  108661 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:10:56.654308  108661 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:10:56.677205  108661 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 11:10:56.742562  108661 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 11:10:56.742908  108661 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 11:10:56.856271  108661 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 11:10:56.856427  108661 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 11:10:56.856604  108661 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 11:10:56.867399  108661 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 11:10:56.923620  108661 out.go:252]   - Generating certificates and keys ...
	I1101 11:10:56.923728  108661 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 11:10:56.923835  108661 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 11:10:56.923961  108661 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 11:10:57.248352  108661 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 11:10:52.676230  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:52.677163  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:52.677182  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:52.677597  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:52.677639  108776 retry.go:31] will retry after 664.179046ms: waiting for domain to come up
	I1101 11:10:53.344317  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:53.345166  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:53.345191  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:53.345707  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:53.345758  108776 retry.go:31] will retry after 837.591891ms: waiting for domain to come up
	I1101 11:10:54.186815  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:54.187695  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:54.187771  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:54.188263  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:54.188354  108776 retry.go:31] will retry after 721.993568ms: waiting for domain to come up
	I1101 11:10:54.911886  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:54.912606  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:54.912627  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:54.913095  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:54.913141  108776 retry.go:31] will retry after 1.416266433s: waiting for domain to come up
	I1101 11:10:56.332062  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:56.333034  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:56.333062  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:56.333646  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:56.333697  108776 retry.go:31] will retry after 1.74901992s: waiting for domain to come up
	I1101 11:10:55.307707  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:10:55.307742  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:10:55.307766  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:55.354278  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:10:55.354311  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:10:55.469687  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:55.475166  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:10:55.475193  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
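The connection-refused, 403 and 500 responses above are the normal progression while a freshly started apiserver finishes its post-start hooks (rbac/bootstrap-roles is typically the last to clear); the tool simply keeps GETting /healthz until it returns 200. A minimal sketch of that probe loop follows; waitForHealthz is a made-up name, and the insecure TLS config is for illustration only, matching the anonymous, certificate-less probe this log shows.

// Sketch only: poll https://<node>:8443/healthz until it reports 200 OK,
// tolerating the 403/500 responses seen above while bootstrap hooks finish.
// InsecureSkipVerify is for illustration; a real probe should pin the CA.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the checks in the log
	}
	return fmt.Errorf("apiserver healthz never became ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.83.133:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}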
	I1101 11:10:55.969938  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:55.978476  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:10:55.978505  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:10:56.469063  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:56.474121  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:10:56.474158  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:10:56.969865  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:56.975020  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:10:56.975046  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:10:57.469852  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:57.477783  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:10:57.477819  108549 api_server.go:103] status: https://192.168.83.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:10:57.969327  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:57.976266  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 200:
	ok
	I1101 11:10:57.989248  108549 api_server.go:141] control plane version: v1.34.1
	I1101 11:10:57.989278  108549 api_server.go:131] duration metric: took 6.520442134s to wait for apiserver health ...
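The block above is minikube polling the apiserver's /healthz endpoint roughly every half second until the 500 responses (failing only on the rbac/bootstrap-roles post-start hook) turn into a 200 "ok". Purely as an illustration, not minikube's actual api_server.go code, a poll of that shape could be sketched as below; the endpoint URL is taken from the log and the TLS handling is an assumption.

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or ctx expires.
// The caller supplies the TLS config (CA / client certs as needed).
func waitForHealthz(ctx context.Context, url string, tlsCfg *tls.Config) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: tlsCfg},
		Timeout:   5 * time.Second,
	}
	ticker := time.NewTicker(500 * time.Millisecond) // ~0.5s cadence, as in the log
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// On 500 the body lists each post-start hook, e.g.
			// "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld".
			fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// InsecureSkipVerify is for the sketch only; the real check verifies the cluster CA.
	_ = waitForHealthz(ctx, "https://192.168.83.133:8443/healthz", &tls.Config{InsecureSkipVerify: true})
}
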
	I1101 11:10:57.989289  108549 cni.go:84] Creating CNI manager for ""
	I1101 11:10:57.989296  108549 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:10:57.990876  108549 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:10:57.992230  108549 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:10:58.006848  108549 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 11:10:58.033910  108549 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:10:58.039306  108549 system_pods.go:59] 6 kube-system pods found
	I1101 11:10:58.039354  108549 system_pods.go:61] "coredns-66bc5c9577-crbpm" [f25a6b07-34dc-4d43-9b5a-59ca2a8be742] Running
	I1101 11:10:58.039370  108549 system_pods.go:61] "etcd-pause-112657" [ad182588-1ea3-41fd-88a3-7f0337e0f7bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:10:58.039382  108549 system_pods.go:61] "kube-apiserver-pause-112657" [af5176d8-4f34-48a6-9960-e7bc9a604816] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:10:58.039395  108549 system_pods.go:61] "kube-controller-manager-pause-112657" [bb6d726e-7590-4f1f-b719-3c995d2f115e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:10:58.039403  108549 system_pods.go:61] "kube-proxy-pmht9" [93cedff1-d264-4c71-af06-95e4b53e637e] Running
	I1101 11:10:58.039413  108549 system_pods.go:61] "kube-scheduler-pause-112657" [2e9914ec-859d-4893-b671-18ce0be5fe70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:10:58.039421  108549 system_pods.go:74] duration metric: took 5.478073ms to wait for pod list to return data ...
	I1101 11:10:58.039432  108549 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:10:58.047406  108549 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:10:58.047447  108549 node_conditions.go:123] node cpu capacity is 2
	I1101 11:10:58.047462  108549 node_conditions.go:105] duration metric: took 8.02459ms to run NodePressure ...
	I1101 11:10:58.047523  108549 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:10:58.489292  108549 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 11:10:58.493678  108549 kubeadm.go:744] kubelet initialised
	I1101 11:10:58.493710  108549 kubeadm.go:745] duration metric: took 4.388706ms waiting for restarted kubelet to initialise ...
	I1101 11:10:58.493734  108549 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:10:58.520476  108549 ops.go:34] apiserver oom_adj: -16
	I1101 11:10:58.520502  108549 kubeadm.go:602] duration metric: took 20.568272955s to restartPrimaryControlPlane
	I1101 11:10:58.520514  108549 kubeadm.go:403] duration metric: took 20.898808507s to StartCluster
	I1101 11:10:58.520553  108549 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:58.520662  108549 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:10:58.521656  108549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:10:58.521965  108549 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.133 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:10:58.522104  108549 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:10:58.522269  108549 config.go:182] Loaded profile config "pause-112657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:10:58.524034  108549 out.go:179] * Verifying Kubernetes components...
	I1101 11:10:58.524041  108549 out.go:179] * Enabled addons: 
	I1101 11:10:57.539784  108661 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 11:10:57.617128  108661 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 11:10:57.687158  108661 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 11:10:57.687319  108661 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-216814 localhost] and IPs [192.168.39.236 127.0.0.1 ::1]
	I1101 11:10:57.863702  108661 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 11:10:57.863986  108661 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-216814 localhost] and IPs [192.168.39.236 127.0.0.1 ::1]
	I1101 11:10:57.981186  108661 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 11:10:58.094310  108661 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 11:10:58.365497  108661 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 11:10:58.365610  108661 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 11:10:58.435990  108661 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 11:10:58.825227  108661 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 11:10:59.182103  108661 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 11:10:59.640473  108661 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 11:11:00.222034  108661 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 11:11:00.222150  108661 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 11:11:00.225434  108661 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 11:11:00.227687  108661 out.go:252]   - Booting up control plane ...
	I1101 11:11:00.227826  108661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 11:11:00.228749  108661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 11:11:00.229680  108661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 11:11:00.254086  108661 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 11:11:00.254243  108661 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 11:11:00.264584  108661 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 11:11:00.264755  108661 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 11:11:00.264837  108661 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 11:11:00.442584  108661 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 11:11:00.442794  108661 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 11:11:01.443590  108661 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001971002s
	I1101 11:11:01.448676  108661 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 11:11:01.448792  108661 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.236:8443/livez
	I1101 11:11:01.448917  108661 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 11:11:01.449068  108661 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 11:10:58.084980  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:10:58.085816  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:10:58.085834  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:10:58.086405  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:10:58.086448  108776 retry.go:31] will retry after 1.925879476s: waiting for domain to come up
	I1101 11:11:00.013986  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:00.014853  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:11:00.014876  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:11:00.015349  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:11:00.015394  108776 retry.go:31] will retry after 2.062807968s: waiting for domain to come up
	I1101 11:11:02.080195  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:02.081068  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:11:02.081097  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:11:02.081667  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:11:02.081717  108776 retry.go:31] will retry after 3.437048574s: waiting for domain to come up
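The kindnet-216814 machine above is still waiting for its libvirt domain to pick up a DHCP lease: each pass checks the lease table, falls back to ARP, and then schedules another attempt after a growing delay (1.9s, 2.0s, 3.4s, ...). A minimal, self-contained sketch of that retry shape follows; lookupIP is a hypothetical stand-in for the real lease/ARP lookups, not libmachine's API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for the lease/ARP lookups in the log;
// it always fails here so the retry loop is exercised.
func lookupIP(domain string) (string, error) {
	return "", errors.New("no network interface addresses found")
}

// waitForDomainIP retries with a growing, jittered delay, mirroring the
// "will retry after 1.9s / 2.0s / 3.4s" lines above.
func waitForDomainIP(domain string, deadline time.Time) (string, error) {
	delay := time.Second
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("will retry after %v: waiting for domain to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay on each failure
	}
	return "", fmt.Errorf("domain %s never reported an IP address", domain)
}

func main() {
	if _, err := waitForDomainIP("kindnet-216814", time.Now().Add(10*time.Second)); err != nil {
		fmt.Println(err)
	}
}
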
	I1101 11:10:58.525559  108549 addons.go:515] duration metric: took 3.483758ms for enable addons: enabled=[]
	I1101 11:10:58.525592  108549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:10:58.807639  108549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:10:58.833479  108549 node_ready.go:35] waiting up to 6m0s for node "pause-112657" to be "Ready" ...
	I1101 11:10:58.836956  108549 node_ready.go:49] node "pause-112657" is "Ready"
	I1101 11:10:58.836994  108549 node_ready.go:38] duration metric: took 3.466341ms for node "pause-112657" to be "Ready" ...
	I1101 11:10:58.837009  108549 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:10:58.837086  108549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:10:58.857370  108549 api_server.go:72] duration metric: took 335.358769ms to wait for apiserver process to appear ...
	I1101 11:10:58.857406  108549 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:10:58.857430  108549 api_server.go:253] Checking apiserver healthz at https://192.168.83.133:8443/healthz ...
	I1101 11:10:58.864031  108549 api_server.go:279] https://192.168.83.133:8443/healthz returned 200:
	ok
	I1101 11:10:58.865059  108549 api_server.go:141] control plane version: v1.34.1
	I1101 11:10:58.865092  108549 api_server.go:131] duration metric: took 7.670212ms to wait for apiserver health ...
	I1101 11:10:58.865103  108549 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:10:58.869207  108549 system_pods.go:59] 6 kube-system pods found
	I1101 11:10:58.869235  108549 system_pods.go:61] "coredns-66bc5c9577-crbpm" [f25a6b07-34dc-4d43-9b5a-59ca2a8be742] Running
	I1101 11:10:58.869247  108549 system_pods.go:61] "etcd-pause-112657" [ad182588-1ea3-41fd-88a3-7f0337e0f7bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:10:58.869256  108549 system_pods.go:61] "kube-apiserver-pause-112657" [af5176d8-4f34-48a6-9960-e7bc9a604816] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:10:58.869269  108549 system_pods.go:61] "kube-controller-manager-pause-112657" [bb6d726e-7590-4f1f-b719-3c995d2f115e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:10:58.869297  108549 system_pods.go:61] "kube-proxy-pmht9" [93cedff1-d264-4c71-af06-95e4b53e637e] Running
	I1101 11:10:58.869309  108549 system_pods.go:61] "kube-scheduler-pause-112657" [2e9914ec-859d-4893-b671-18ce0be5fe70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:10:58.869318  108549 system_pods.go:74] duration metric: took 4.207545ms to wait for pod list to return data ...
	I1101 11:10:58.869329  108549 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:10:58.871459  108549 default_sa.go:45] found service account: "default"
	I1101 11:10:58.871480  108549 default_sa.go:55] duration metric: took 2.143644ms for default service account to be created ...
	I1101 11:10:58.871489  108549 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:10:58.875937  108549 system_pods.go:86] 6 kube-system pods found
	I1101 11:10:58.875966  108549 system_pods.go:89] "coredns-66bc5c9577-crbpm" [f25a6b07-34dc-4d43-9b5a-59ca2a8be742] Running
	I1101 11:10:58.875979  108549 system_pods.go:89] "etcd-pause-112657" [ad182588-1ea3-41fd-88a3-7f0337e0f7bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:10:58.875990  108549 system_pods.go:89] "kube-apiserver-pause-112657" [af5176d8-4f34-48a6-9960-e7bc9a604816] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:10:58.876000  108549 system_pods.go:89] "kube-controller-manager-pause-112657" [bb6d726e-7590-4f1f-b719-3c995d2f115e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:10:58.876006  108549 system_pods.go:89] "kube-proxy-pmht9" [93cedff1-d264-4c71-af06-95e4b53e637e] Running
	I1101 11:10:58.876017  108549 system_pods.go:89] "kube-scheduler-pause-112657" [2e9914ec-859d-4893-b671-18ce0be5fe70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:10:58.876027  108549 system_pods.go:126] duration metric: took 4.530899ms to wait for k8s-apps to be running ...
	I1101 11:10:58.876039  108549 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:10:58.876100  108549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:10:58.894381  108549 system_svc.go:56] duration metric: took 18.330906ms WaitForService to wait for kubelet
	I1101 11:10:58.894414  108549 kubeadm.go:587] duration metric: took 372.409575ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:10:58.894436  108549 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:10:58.897256  108549 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:10:58.897278  108549 node_conditions.go:123] node cpu capacity is 2
	I1101 11:10:58.897291  108549 node_conditions.go:105] duration metric: took 2.848766ms to run NodePressure ...
	I1101 11:10:58.897306  108549 start.go:242] waiting for startup goroutines ...
	I1101 11:10:58.897317  108549 start.go:247] waiting for cluster config update ...
	I1101 11:10:58.897331  108549 start.go:256] writing updated cluster config ...
	I1101 11:10:58.897737  108549 ssh_runner.go:195] Run: rm -f paused
	I1101 11:10:58.903892  108549 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:10:58.904513  108549 kapi.go:59] client config for pause-112657: &rest.Config{Host:"https://192.168.83.133:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/client.crt", KeyFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/profiles/pause-112657/client.key", CAFile:"/home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 11:10:58.909303  108549 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-crbpm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:10:58.915943  108549 pod_ready.go:94] pod "coredns-66bc5c9577-crbpm" is "Ready"
	I1101 11:10:58.915969  108549 pod_ready.go:86] duration metric: took 6.642416ms for pod "coredns-66bc5c9577-crbpm" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:10:58.918777  108549 pod_ready.go:83] waiting for pod "etcd-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:10:59.926689  108549 pod_ready.go:94] pod "etcd-pause-112657" is "Ready"
	I1101 11:10:59.926722  108549 pod_ready.go:86] duration metric: took 1.007920051s for pod "etcd-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:10:59.931641  108549 pod_ready.go:83] waiting for pod "kube-apiserver-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:01.940052  108549 pod_ready.go:94] pod "kube-apiserver-pause-112657" is "Ready"
	I1101 11:11:01.940085  108549 pod_ready.go:86] duration metric: took 2.008414514s for pod "kube-apiserver-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:01.943342  108549 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:04.426967  108661 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.980516427s
	I1101 11:11:05.870721  108661 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.425775837s
	I1101 11:11:05.522265  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:05.522931  108776 main.go:143] libmachine: no network interface addresses found for domain kindnet-216814 (source=lease)
	I1101 11:11:05.522948  108776 main.go:143] libmachine: trying to list again with source=arp
	I1101 11:11:05.523429  108776 main.go:143] libmachine: unable to find current IP address of domain kindnet-216814 in network mk-kindnet-216814 (interfaces detected: [])
	I1101 11:11:05.523465  108776 retry.go:31] will retry after 4.363124933s: waiting for domain to come up
	I1101 11:11:02.951764  108549 pod_ready.go:94] pod "kube-controller-manager-pause-112657" is "Ready"
	I1101 11:11:02.951799  108549 pod_ready.go:86] duration metric: took 1.008372766s for pod "kube-controller-manager-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:02.954380  108549 pod_ready.go:83] waiting for pod "kube-proxy-pmht9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:03.109880  108549 pod_ready.go:94] pod "kube-proxy-pmht9" is "Ready"
	I1101 11:11:03.109917  108549 pod_ready.go:86] duration metric: took 155.510185ms for pod "kube-proxy-pmht9" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:03.309471  108549 pod_ready.go:83] waiting for pod "kube-scheduler-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 11:11:05.317170  108549 pod_ready.go:104] pod "kube-scheduler-pause-112657" is not "Ready", error: <nil>
	W1101 11:11:07.816738  108549 pod_ready.go:104] pod "kube-scheduler-pause-112657" is not "Ready", error: <nil>
	I1101 11:11:07.947550  108661 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.503701615s
	I1101 11:11:07.968139  108661 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 11:11:07.982853  108661 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 11:11:08.005305  108661 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 11:11:08.005586  108661 kubeadm.go:319] [mark-control-plane] Marking the node auto-216814 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 11:11:08.018387  108661 kubeadm.go:319] [bootstrap-token] Using token: d6s76y.hcvkg7oo9lwcty05
	I1101 11:11:08.019677  108661 out.go:252]   - Configuring RBAC rules ...
	I1101 11:11:08.019830  108661 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 11:11:08.028340  108661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 11:11:08.037193  108661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 11:11:08.043840  108661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 11:11:08.047602  108661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 11:11:08.051345  108661 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 11:11:08.355915  108661 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 11:11:08.825271  108661 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 11:11:09.358877  108661 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 11:11:09.360166  108661 kubeadm.go:319] 
	I1101 11:11:09.360279  108661 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 11:11:09.360298  108661 kubeadm.go:319] 
	I1101 11:11:09.360400  108661 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 11:11:09.360410  108661 kubeadm.go:319] 
	I1101 11:11:09.360448  108661 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 11:11:09.360558  108661 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 11:11:09.360743  108661 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 11:11:09.360762  108661 kubeadm.go:319] 
	I1101 11:11:09.360839  108661 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 11:11:09.360849  108661 kubeadm.go:319] 
	I1101 11:11:09.360889  108661 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 11:11:09.360895  108661 kubeadm.go:319] 
	I1101 11:11:09.360978  108661 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 11:11:09.361113  108661 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 11:11:09.361212  108661 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 11:11:09.361222  108661 kubeadm.go:319] 
	I1101 11:11:09.361356  108661 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 11:11:09.361433  108661 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 11:11:09.361446  108661 kubeadm.go:319] 
	I1101 11:11:09.361585  108661 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token d6s76y.hcvkg7oo9lwcty05 \
	I1101 11:11:09.361746  108661 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a \
	I1101 11:11:09.361787  108661 kubeadm.go:319] 	--control-plane 
	I1101 11:11:09.361796  108661 kubeadm.go:319] 
	I1101 11:11:09.361921  108661 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 11:11:09.361930  108661 kubeadm.go:319] 
	I1101 11:11:09.362010  108661 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token d6s76y.hcvkg7oo9lwcty05 \
	I1101 11:11:09.362131  108661 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:ad8ee8749587d4da67d76f75358688c9a611301f34b35f940a9e7fa320504c7a 
	I1101 11:11:09.363288  108661 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 11:11:09.363315  108661 cni.go:84] Creating CNI manager for ""
	I1101 11:11:09.363323  108661 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:11:09.365030  108661 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:11:10.317879  108549 pod_ready.go:94] pod "kube-scheduler-pause-112657" is "Ready"
	I1101 11:11:10.317912  108549 pod_ready.go:86] duration metric: took 7.008410496s for pod "kube-scheduler-pause-112657" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:11:10.317924  108549 pod_ready.go:40] duration metric: took 11.414002138s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:11:10.372210  108549 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 11:11:10.373933  108549 out.go:179] * Done! kubectl is now configured to use "pause-112657" cluster and "default" namespace by default
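The pod_ready.go lines above wait, per label, for every matching kube-system pod to report the Ready condition (or to disappear) before the pause-112657 run declares itself done. Below is a rough client-go sketch of that kind of wait; it is not minikube's implementation, it ignores the "or be gone" case, and it assumes the kubeconfig path shown earlier in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod has the Ready condition set to True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForLabelReady polls pods matching selector in kube-system every two
// seconds until every match is Ready or ctx expires.
func waitForLabelReady(ctx context.Context, cs *kubernetes.Clientset, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for i := range pods.Items {
				if !podIsReady(&pods.Items[i]) {
					ready = false
					break
				}
			}
			if ready {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pods %q not Ready: %w", selector, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21830-70113/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for _, sel := range []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"} {
		if err := waitForLabelReady(ctx, cs, sel); err != nil {
			fmt.Println(err)
		}
	}
}
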
	I1101 11:11:11.656509  109110 start.go:364] duration metric: took 27.552691802s to acquireMachinesLock for "calico-216814"
	I1101 11:11:11.656643  109110 start.go:93] Provisioning new machine with config: &{Name:calico-216814 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-216814 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:11:11.656773  109110 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 11:11:09.366398  108661 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:11:09.383015  108661 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 11:11:09.410817  108661 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:11:09.410872  108661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:11:09.410933  108661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-216814 minikube.k8s.io/updated_at=2025_11_01T11_11_09_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845 minikube.k8s.io/name=auto-216814 minikube.k8s.io/primary=true
	I1101 11:11:09.581391  108661 ops.go:34] apiserver oom_adj: -16
	I1101 11:11:09.581441  108661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:11:10.081750  108661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:11:10.581584  108661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:11:11.082113  108661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:11:11.581788  108661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 11:11:12.082261  108661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
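Between kubeadm finishing and the cluster-admin RBAC binding settling, the auto-216814 run above keeps re-executing "kubectl get sa default" about every 500ms until the default service account exists. Purely as an illustration (minikube actually runs the command on the guest through its ssh_runner), the same poll could look like this, with the binary and kubeconfig paths taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const kubectl = "/var/lib/minikube/binaries/v1.34.1/kubectl"
	const kubeconfig = "/var/lib/minikube/kubeconfig"

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once the "default" ServiceAccount has been created.
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
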
	I1101 11:11:09.888964  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:09.890038  108776 main.go:143] libmachine: domain kindnet-216814 has current primary IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:09.890065  108776 main.go:143] libmachine: found domain IP: 192.168.61.131
	I1101 11:11:09.890077  108776 main.go:143] libmachine: reserving static IP address...
	I1101 11:11:09.890705  108776 main.go:143] libmachine: unable to find host DHCP lease matching {name: "kindnet-216814", mac: "52:54:00:56:75:ca", ip: "192.168.61.131"} in network mk-kindnet-216814
	I1101 11:11:10.156750  108776 main.go:143] libmachine: reserved static IP address 192.168.61.131 for domain kindnet-216814
	I1101 11:11:10.156782  108776 main.go:143] libmachine: waiting for SSH...
	I1101 11:11:10.156790  108776 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 11:11:10.160567  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.161184  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:minikube Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:10.161226  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.161451  108776 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:10.161848  108776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.131 22 <nil> <nil>}
	I1101 11:11:10.161870  108776 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 11:11:10.269692  108776 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:11:10.270127  108776 main.go:143] libmachine: domain creation complete
	I1101 11:11:10.272202  108776 machine.go:94] provisionDockerMachine start ...
	I1101 11:11:10.274624  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.275029  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:10.275053  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.275206  108776 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:10.275398  108776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.131 22 <nil> <nil>}
	I1101 11:11:10.275408  108776 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:11:10.382297  108776 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 11:11:10.382345  108776 buildroot.go:166] provisioning hostname "kindnet-216814"
	I1101 11:11:10.385916  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.386453  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:10.386502  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.386759  108776 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:10.387083  108776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.131 22 <nil> <nil>}
	I1101 11:11:10.387102  108776 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-216814 && echo "kindnet-216814" | sudo tee /etc/hostname
	I1101 11:11:10.524040  108776 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-216814
	
	I1101 11:11:10.527587  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.528146  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:10.528203  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.528479  108776 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:10.528799  108776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.131 22 <nil> <nil>}
	I1101 11:11:10.528830  108776 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-216814' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-216814/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-216814' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:11:10.649004  108776 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:11:10.649045  108776 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 11:11:10.649077  108776 buildroot.go:174] setting up certificates
	I1101 11:11:10.649091  108776 provision.go:84] configureAuth start
	I1101 11:11:10.652497  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.653098  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:10.653141  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.656714  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.657223  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:10.657251  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.657443  108776 provision.go:143] copyHostCerts
	I1101 11:11:10.657513  108776 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem, removing ...
	I1101 11:11:10.657548  108776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem
	I1101 11:11:10.657657  108776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 11:11:10.657794  108776 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem, removing ...
	I1101 11:11:10.657807  108776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem
	I1101 11:11:10.657853  108776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 11:11:10.657944  108776 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem, removing ...
	I1101 11:11:10.657954  108776 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem
	I1101 11:11:10.657992  108776 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 11:11:10.658072  108776 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.kindnet-216814 san=[127.0.0.1 192.168.61.131 kindnet-216814 localhost minikube]
	I1101 11:11:10.897419  108776 provision.go:177] copyRemoteCerts
	I1101 11:11:10.897484  108776 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:11:10.900518  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.900937  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:10.900974  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:10.901140  108776 sshutil.go:53] new ssh client: &{IP:192.168.61.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/id_rsa Username:docker}
	I1101 11:11:10.993106  108776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 11:11:11.037568  108776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1101 11:11:11.074328  108776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:11:11.114175  108776 provision.go:87] duration metric: took 465.065237ms to configureAuth
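provision.go above generates a server certificate for the new machine, signed by the test host's CA and covering the SANs 127.0.0.1, 192.168.61.131, kindnet-216814, localhost and minikube, before scp-ing ca.pem, server.pem and server-key.pem into /etc/docker. The sketch below shows the general crypto/x509 recipe for such a SAN-bearing server certificate; it creates a throwaway CA instead of loading minikube's ca.pem/ca-key.pem and drops error handling for brevity, so it is not the real provisioning code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (the real flow signs with the existing ca.pem/ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-216814"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kindnet-216814", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.131")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
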
	I1101 11:11:11.114216  108776 buildroot.go:189] setting minikube options for container-runtime
	I1101 11:11:11.114444  108776 config.go:182] Loaded profile config "kindnet-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:11:11.117879  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.118445  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:11.118486  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.118704  108776 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:11.119005  108776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.131 22 <nil> <nil>}
	I1101 11:11:11.119033  108776 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:11:11.384373  108776 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:11:11.384407  108776 machine.go:97] duration metric: took 1.112180846s to provisionDockerMachine
	I1101 11:11:11.384422  108776 client.go:176] duration metric: took 22.503027253s to LocalClient.Create
	I1101 11:11:11.384445  108776 start.go:167] duration metric: took 22.503117607s to libmachine.API.Create "kindnet-216814"
	I1101 11:11:11.384455  108776 start.go:293] postStartSetup for "kindnet-216814" (driver="kvm2")
	I1101 11:11:11.384469  108776 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:11:11.384588  108776 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:11:11.387934  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.388434  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:11.388472  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.388682  108776 sshutil.go:53] new ssh client: &{IP:192.168.61.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/id_rsa Username:docker}
	I1101 11:11:11.476751  108776 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:11:11.482395  108776 info.go:137] Remote host: Buildroot 2025.02
	I1101 11:11:11.482422  108776 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 11:11:11.482499  108776 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 11:11:11.482614  108776 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem -> 739982.pem in /etc/ssl/certs
	I1101 11:11:11.482727  108776 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:11:11.498663  108776 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:11:11.535219  108776 start.go:296] duration metric: took 150.743785ms for postStartSetup
	I1101 11:11:11.538623  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.539106  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:11.539143  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.539483  108776 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/config.json ...
	I1101 11:11:11.539725  108776 start.go:128] duration metric: took 22.660495944s to createHost
	I1101 11:11:11.542264  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.542710  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:11.542740  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.542967  108776 main.go:143] libmachine: Using SSH client type: native
	I1101 11:11:11.543197  108776 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.61.131 22 <nil> <nil>}
	I1101 11:11:11.543215  108776 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 11:11:11.656327  108776 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761995471.617250451
	
	I1101 11:11:11.656349  108776 fix.go:216] guest clock: 1761995471.617250451
	I1101 11:11:11.656360  108776 fix.go:229] Guest: 2025-11-01 11:11:11.617250451 +0000 UTC Remote: 2025-11-01 11:11:11.5397427 +0000 UTC m=+54.197544231 (delta=77.507751ms)
	I1101 11:11:11.656382  108776 fix.go:200] guest clock delta is within tolerance: 77.507751ms
	I1101 11:11:11.656389  108776 start.go:83] releasing machines lock for "kindnet-216814", held for 22.777375123s
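Right before releasing the machines lock, fix.go compares the guest VM's clock (read via "date +%s.%N" over SSH) against the host's and continues quietly only when the delta is within a tolerance; here the skew was about 77.5ms. A minimal sketch of that comparison follows, with the tolerance value being an assumed example rather than minikube's actual constant.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute guest/host clock skew and whether it is
// within the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log above: guest clock 1761995471.617250451,
	// remote (host-side) time 2025-11-01 11:11:11.5397427 UTC.
	guest := time.Unix(1761995471, 617250451)
	host := time.Date(2025, 11, 1, 11, 11, 11, 539742700, time.UTC)

	// One second is an assumed example tolerance, not minikube's real threshold.
	if delta, ok := clockDeltaOK(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
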
	I1101 11:11:11.661013  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.661599  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:11.661644  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.662353  108776 ssh_runner.go:195] Run: cat /version.json
	I1101 11:11:11.662419  108776 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:11:11.667405  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.667981  108776 main.go:143] libmachine: domain kindnet-216814 has defined MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.668478  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:11.668510  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.668688  108776 sshutil.go:53] new ssh client: &{IP:192.168.61.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/id_rsa Username:docker}
	I1101 11:11:11.669461  108776 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:56:75:ca", ip: ""} in network mk-kindnet-216814: {Iface:virbr3 ExpiryTime:2025-11-01 12:11:06 +0000 UTC Type:0 Mac:52:54:00:56:75:ca Iaid: IPaddr:192.168.61.131 Prefix:24 Hostname:kindnet-216814 Clientid:01:52:54:00:56:75:ca}
	I1101 11:11:11.669501  108776 main.go:143] libmachine: domain kindnet-216814 has defined IP address 192.168.61.131 and MAC address 52:54:00:56:75:ca in network mk-kindnet-216814
	I1101 11:11:11.669893  108776 sshutil.go:53] new ssh client: &{IP:192.168.61.131 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/kindnet-216814/id_rsa Username:docker}
	I1101 11:11:11.755209  108776 ssh_runner.go:195] Run: systemctl --version
	I1101 11:11:11.783737  108776 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:11:11.954295  108776 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:11:11.962589  108776 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:11:11.962680  108776 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:11:11.987177  108776 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 11:11:11.987205  108776 start.go:496] detecting cgroup driver to use...
	I1101 11:11:11.987275  108776 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:11:12.013189  108776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:11:12.035735  108776 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:11:12.035820  108776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:11:12.058698  108776 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:11:12.080031  108776 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:11:12.282008  108776 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	
	
	==> CRI-O <==
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.600928117Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6bcf87e8-3651-4ee9-b094-7c20165c7fe7 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.605189890Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ec799d7-1586-4ad8-8f2e-09044c0c65fa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.606179119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761995473606140239,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ec799d7-1586-4ad8-8f2e-09044c0c65fa name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.607696010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b356428-0a3c-42aa-9fcd-6d87a4b0bc9f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.607819046Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b356428-0a3c-42aa-9fcd-6d87a4b0bc9f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.608695907Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4cd50e19c80294183a84f1d6b4ba1319f1f8e73be2fdbbb0ea63eb0cfa3d1e,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995451043743074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aa50e95ae4ca4ff8424b31d4eed01ebd730345a23823c47c6c7c5d5f53b248,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995451068389604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annot
ations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ea1384edf60f795b201ee53bb5bb7090a53d71fa1667e6af09c1fdcfbe0740,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe9185934279757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995451047336429,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc15987bc0dc62809b59d9436ba53710885fb23be0858d321037f784b4985be,PodSandboxId:2f66f988ef08fa7c0104edd97f6f605f5f588f33e0d4b28bfca9f4064122eedd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995438774878738,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb1f5d3d88e5858fba14591c807783a4a562eb1c18635ab0a6d79b3cfaf2963,PodSandboxId:a90f13f4843643d5808f97b5aa874429995acd58fc58f7dd7554b9cccf033519,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761995438002251701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cac02c746090b038b03b7e3666ef90276d94f274eb8926c189d832b02e7d27b,PodSandboxId:ad282dbcda533
473f977b16f887db85cb11621f5777cf7ceac5e424f29fc7daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995437398010668,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe918593427
9757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761995437537846506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f557d4e8f14008c0f3af6
10b5e7d21f6bc34a9ef9b305c98652539ec8b3a059,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761995437321516572,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761995437231711940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b,PodSandboxId:a4ef5d2bb179947a42573697c25401c0212c94b3b46dd36c8c4d666705dcaed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761995403066370455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79,PodSandboxId:67b8fa24902cc65de9c1fb88a1b0a1e960ae441974a0c6b3442907e5b9c845e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17619
95401486043006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950,PodSandboxId:e78b0a7a8f0bae29bf39ab429ab25b51b7eda6f250360739d999c601048ccbc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761995388421602581,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b356428-0a3c-42aa-9fcd-6d87a4b0bc9f name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.664704947Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=905e961c-f76e-464c-9781-5aa6dd9e6bc5 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.664806056Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=905e961c-f76e-464c-9781-5aa6dd9e6bc5 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.666371988Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7bcf5773-a817-48d8-bb3e-c1baad58ca00 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.666795061Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761995473666770391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bcf5773-a817-48d8-bb3e-c1baad58ca00 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.667573405Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6ca4b8b-b4cd-4ecf-aab2-07c0cec0ca4a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.667675280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6ca4b8b-b4cd-4ecf-aab2-07c0cec0ca4a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.667998511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4cd50e19c80294183a84f1d6b4ba1319f1f8e73be2fdbbb0ea63eb0cfa3d1e,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995451043743074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aa50e95ae4ca4ff8424b31d4eed01ebd730345a23823c47c6c7c5d5f53b248,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995451068389604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annot
ations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ea1384edf60f795b201ee53bb5bb7090a53d71fa1667e6af09c1fdcfbe0740,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe9185934279757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995451047336429,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc15987bc0dc62809b59d9436ba53710885fb23be0858d321037f784b4985be,PodSandboxId:2f66f988ef08fa7c0104edd97f6f605f5f588f33e0d4b28bfca9f4064122eedd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995438774878738,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb1f5d3d88e5858fba14591c807783a4a562eb1c18635ab0a6d79b3cfaf2963,PodSandboxId:a90f13f4843643d5808f97b5aa874429995acd58fc58f7dd7554b9cccf033519,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761995438002251701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cac02c746090b038b03b7e3666ef90276d94f274eb8926c189d832b02e7d27b,PodSandboxId:ad282dbcda533
473f977b16f887db85cb11621f5777cf7ceac5e424f29fc7daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995437398010668,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe918593427
9757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761995437537846506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f557d4e8f14008c0f3af6
10b5e7d21f6bc34a9ef9b305c98652539ec8b3a059,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761995437321516572,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761995437231711940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b,PodSandboxId:a4ef5d2bb179947a42573697c25401c0212c94b3b46dd36c8c4d666705dcaed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761995403066370455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79,PodSandboxId:67b8fa24902cc65de9c1fb88a1b0a1e960ae441974a0c6b3442907e5b9c845e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17619
95401486043006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950,PodSandboxId:e78b0a7a8f0bae29bf39ab429ab25b51b7eda6f250360739d999c601048ccbc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761995388421602581,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6ca4b8b-b4cd-4ecf-aab2-07c0cec0ca4a name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.669003913Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a2414d1c-b4af-4090-be92-dded10466f61 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.669412976Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:2f66f988ef08fa7c0104edd97f6f605f5f588f33e0d4b28bfca9f4064122eedd,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-crbpm,Uid:f25a6b07-34dc-4d43-9b5a-59ca2a8be742,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761995437415179375,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T11:10:00.898674944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a90f13f4843643d5808f97b5aa874429995acd58fc58f7dd7554b9cccf033519,Metadata:&PodSandboxMetadata{Name:etcd-pause-112657,Uid:de6a9ba0be63604a02fcdf568085f944,Namespace:kube-system,Attempt:1,
},State:SANDBOX_READY,CreatedAt:1761995437398426387,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.133:2379,kubernetes.io/config.hash: de6a9ba0be63604a02fcdf568085f944,kubernetes.io/config.seen: 2025-11-01T11:09:55.238870908Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6fe98b0d10f58aca88761422242b87e65d4fe9185934279757b3f309204ea8c2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-112657,Uid:f5858f94a269fd1471ef44747e4b5a67,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761995437123249164,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.p
od.uid: f5858f94a269fd1471ef44747e4b5a67,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.83.133:8443,kubernetes.io/config.hash: f5858f94a269fd1471ef44747e4b5a67,kubernetes.io/config.seen: 2025-11-01T11:09:55.238875419Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-112657,Uid:12cf0f834b507b710e971bc13c0c41be,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761995436725745568,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 12cf0f834b507b710e971bc13c0c41be,kubernetes.io/config.seen: 2025-11-01T11:09:55.238877542Z,kuberne
tes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-112657,Uid:daeea0fec952be898c7676958c513df5,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761995436705560812,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: daeea0fec952be898c7676958c513df5,kubernetes.io/config.seen: 2025-11-01T11:09:55.238876653Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ad282dbcda533473f977b16f887db85cb11621f5777cf7ceac5e424f29fc7daa,Metadata:&PodSandboxMetadata{Name:kube-proxy-pmht9,Uid:93cedff1-d264-4c71-af06-95e4b53e637e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cre
atedAt:1761995436645267530,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T11:10:00.428464071Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a4ef5d2bb179947a42573697c25401c0212c94b3b46dd36c8c4d666705dcaed2,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-crbpm,Uid:f25a6b07-34dc-4d43-9b5a-59ca2a8be742,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761995402726488662,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io
/config.seen: 2025-11-01T11:10:00.898674944Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:67b8fa24902cc65de9c1fb88a1b0a1e960ae441974a0c6b3442907e5b9c845e1,Metadata:&PodSandboxMetadata{Name:kube-proxy-pmht9,Uid:93cedff1-d264-4c71-af06-95e4b53e637e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1761995400753136990,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T11:10:00.428464071Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e78b0a7a8f0bae29bf39ab429ab25b51b7eda6f250360739d999c601048ccbc0,Metadata:&PodSandboxMetadata{Name:etcd-pause-112657,Uid:de6a9ba0be63604a02fcdf568085f944,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:176
1995388090422435,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.83.133:2379,kubernetes.io/config.hash: de6a9ba0be63604a02fcdf568085f944,kubernetes.io/config.seen: 2025-11-01T11:09:47.239586740Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a2414d1c-b4af-4090-be92-dded10466f61 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.671509864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2f7d1903-0289-4d00-9940-d0a13b7b4cac name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.671596395Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2f7d1903-0289-4d00-9940-d0a13b7b4cac name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.672210848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4cd50e19c80294183a84f1d6b4ba1319f1f8e73be2fdbbb0ea63eb0cfa3d1e,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995451043743074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aa50e95ae4ca4ff8424b31d4eed01ebd730345a23823c47c6c7c5d5f53b248,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995451068389604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annot
ations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ea1384edf60f795b201ee53bb5bb7090a53d71fa1667e6af09c1fdcfbe0740,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe9185934279757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995451047336429,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc15987bc0dc62809b59d9436ba53710885fb23be0858d321037f784b4985be,PodSandboxId:2f66f988ef08fa7c0104edd97f6f605f5f588f33e0d4b28bfca9f4064122eedd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995438774878738,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb1f5d3d88e5858fba14591c807783a4a562eb1c18635ab0a6d79b3cfaf2963,PodSandboxId:a90f13f4843643d5808f97b5aa874429995acd58fc58f7dd7554b9cccf033519,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761995438002251701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cac02c746090b038b03b7e3666ef90276d94f274eb8926c189d832b02e7d27b,PodSandboxId:ad282dbcda533
473f977b16f887db85cb11621f5777cf7ceac5e424f29fc7daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995437398010668,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe918593427
9757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761995437537846506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f557d4e8f14008c0f3af6
10b5e7d21f6bc34a9ef9b305c98652539ec8b3a059,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761995437321516572,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761995437231711940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b,PodSandboxId:a4ef5d2bb179947a42573697c25401c0212c94b3b46dd36c8c4d666705dcaed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761995403066370455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79,PodSandboxId:67b8fa24902cc65de9c1fb88a1b0a1e960ae441974a0c6b3442907e5b9c845e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17619
95401486043006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950,PodSandboxId:e78b0a7a8f0bae29bf39ab429ab25b51b7eda6f250360739d999c601048ccbc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761995388421602581,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2f7d1903-0289-4d00-9940-d0a13b7b4cac name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.735092539Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e95648ad-aa45-4524-8f59-4f6314235e11 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.735223034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e95648ad-aa45-4524-8f59-4f6314235e11 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.737865198Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=197d4200-f866-4235-98e5-c533c0fc45ee name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.738581993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761995473738544663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=197d4200-f866-4235-98e5-c533c0fc45ee name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.739950005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44a6fb59-f190-411e-8ef6-671033deea55 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.740060581Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44a6fb59-f190-411e-8ef6-671033deea55 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:11:13 pause-112657 crio[2544]: time="2025-11-01 11:11:13.741061897Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd4cd50e19c80294183a84f1d6b4ba1319f1f8e73be2fdbbb0ea63eb0cfa3d1e,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995451043743074,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\"
:\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a4aa50e95ae4ca4ff8424b31d4eed01ebd730345a23823c47c6c7c5d5f53b248,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995451068389604,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annot
ations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9ea1384edf60f795b201ee53bb5bb7090a53d71fa1667e6af09c1fdcfbe0740,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe9185934279757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995451047336429,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bc15987bc0dc62809b59d9436ba53710885fb23be0858d321037f784b4985be,PodSandboxId:2f66f988ef08fa7c0104edd97f6f605f5f588f33e0d4b28bfca9f4064122eedd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995438774878738,Labels:map[string]string{
io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fdb1f5d3d88e5858fba14591c807783a4a562eb1c18635ab0a6d79b3cfaf2963,PodSandboxId:a90f13f4843643d5808f97b5aa874429995acd58fc58f7dd7554b9cccf033519,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761995438002251701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cac02c746090b038b03b7e3666ef90276d94f274eb8926c189d832b02e7d27b,PodSandboxId:ad282dbcda533
473f977b16f887db85cb11621f5777cf7ceac5e424f29fc7daa,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995437398010668,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5,PodSandboxId:6fe98b0d10f58aca88761422242b87e65d4fe918593427
9757b3f309204ea8c2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761995437537846506,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5858f94a269fd1471ef44747e4b5a67,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f557d4e8f14008c0f3af6
10b5e7d21f6bc34a9ef9b305c98652539ec8b3a059,PodSandboxId:96707122e744b68bf2c919ddda53e4fe7bb8e933d99057cab719485e8b9eefff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761995437321516572,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12cf0f834b507b710e971bc13c0c41be,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationM
essagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67,PodSandboxId:bfdaa75efcfd40e8e5892fe6d876e50466630d96cf632e6cfb152ae498930807,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761995437231711940,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daeea0fec952be898c7676958c513df5,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.
container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b,PodSandboxId:a4ef5d2bb179947a42573697c25401c0212c94b3b46dd36c8c4d666705dcaed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761995403066370455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-crbpm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a6b07-34dc-4d43-9b5a-59ca2a8be742,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns
\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79,PodSandboxId:67b8fa24902cc65de9c1fb88a1b0a1e960ae441974a0c6b3442907e5b9c845e1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:17619
95401486043006,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pmht9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93cedff1-d264-4c71-af06-95e4b53e637e,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950,PodSandboxId:e78b0a7a8f0bae29bf39ab429ab25b51b7eda6f250360739d999c601048ccbc0,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761995388421602581,Labels:map[string]string{
io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-112657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de6a9ba0be63604a02fcdf568085f944,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44a6fb59-f190-411e-8ef6-671033deea55 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a4aa50e95ae4c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   22 seconds ago       Running             kube-scheduler            2                   96707122e744b       kube-scheduler-pause-112657
	f9ea1384edf60       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   22 seconds ago       Running             kube-apiserver            2                   6fe98b0d10f58       kube-apiserver-pause-112657
	cd4cd50e19c80       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   22 seconds ago       Running             kube-controller-manager   2                   bfdaa75efcfd4       kube-controller-manager-pause-112657
	2bc15987bc0dc       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   35 seconds ago       Running             coredns                   1                   2f66f988ef08f       coredns-66bc5c9577-crbpm
	fdb1f5d3d88e5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago       Running             etcd                      1                   a90f13f484364       etcd-pause-112657
	f8e195bbfd8af       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago       Exited              kube-apiserver            1                   6fe98b0d10f58       kube-apiserver-pause-112657
	8cac02c746090       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   36 seconds ago       Running             kube-proxy                1                   ad282dbcda533       kube-proxy-pmht9
	4f557d4e8f140       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago       Exited              kube-scheduler            1                   96707122e744b       kube-scheduler-pause-112657
	a5f4bd825d401       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago       Exited              kube-controller-manager   1                   bfdaa75efcfd4       kube-controller-manager-pause-112657
	362c37d6f3cbe       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   a4ef5d2bb1799       coredns-66bc5c9577-crbpm
	b140a0c1d767d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   67b8fa24902cc       kube-proxy-pmht9
	17af9600e453e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   e78b0a7a8f0ba       etcd-pause-112657
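
For reference, a container listing like the one above can be reproduced directly on the node; a rough sketch, assuming the pause-112657 profile is still running and crictl is available inside the minikube VM:

    minikube ssh -p pause-112657 "sudo crictl ps -a"
    # narrow to a single workload, e.g. the coredns containers shown above
    minikube ssh -p pause-112657 "sudo crictl ps -a --name coredns"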
	
	
	==> coredns [2bc15987bc0dc62809b59d9436ba53710885fb23be0858d321037f784b4985be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59844 - 12657 "HINFO IN 7001407449660026208.7607439692613121171. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.077808099s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [362c37d6f3cbe57d64055d9d200a6b5d819a9ee4dde9b2fc09af53b1741e8b3b] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54699 - 5061 "HINFO IN 8553294983351850771.5981119637402597002. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.091053634s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
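
The two CoreDNS blocks above are the logs of the current (attempt 1) and exited (attempt 0) containers of coredns-66bc5c9577-crbpm; the connection-refused errors in the first block line up with the kube-apiserver restart visible in the container status table. A minimal sketch for pulling these logs again, assuming the kubeconfig context matches the profile name:

    kubectl --context pause-112657 -n kube-system logs coredns-66bc5c9577-crbpm
    kubectl --context pause-112657 -n kube-system logs coredns-66bc5c9577-crbpm --previous
    # or straight from CRI-O on the node, by (truncated) container ID
    minikube ssh -p pause-112657 "sudo crictl logs 2bc15987bc0dc"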
	
	
	==> describe nodes <==
	Name:               pause-112657
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-112657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=pause-112657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_09_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:09:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-112657
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:11:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:10:55 +0000   Sat, 01 Nov 2025 11:09:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:10:55 +0000   Sat, 01 Nov 2025 11:09:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:10:55 +0000   Sat, 01 Nov 2025 11:09:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:10:55 +0000   Sat, 01 Nov 2025 11:09:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.133
	  Hostname:    pause-112657
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 ec533cff008b47edbf935af9f3a03b16
	  System UUID:                ec533cff-008b-47ed-bf93-5af9f3a03b16
	  Boot ID:                    96b85bcf-a6ae-472a-891b-66a32f625306
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-crbpm                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     74s
	  kube-system                 etcd-pause-112657                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         79s
	  kube-system                 kube-apiserver-pause-112657             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-112657    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-pmht9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-pause-112657             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 72s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeHasSufficientPID     87s (x7 over 87s)  kubelet          Node pause-112657 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)  kubelet          Node pause-112657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)  kubelet          Node pause-112657 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  79s                kubelet          Node pause-112657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s                kubelet          Node pause-112657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s                kubelet          Node pause-112657 status is now: NodeHasSufficientPID
	  Normal  NodeReady                78s                kubelet          Node pause-112657 status is now: NodeReady
	  Normal  RegisteredNode           75s                node-controller  Node pause-112657 event: Registered Node pause-112657 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-112657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-112657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-112657 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15s                node-controller  Node pause-112657 event: Registered Node pause-112657 in Controller
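
The node description above is standard kubectl output; a minimal sketch to regenerate it and to re-check only the recent node events, assuming the context name matches the profile:

    kubectl --context pause-112657 describe node pause-112657
    kubectl --context pause-112657 get events --field-selector involvedObject.name=pause-112657 --sort-by=.lastTimestamp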
	
	
	==> dmesg <==
	[Nov 1 11:09] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001328] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.007052] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.192326] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.116118] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.119933] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.098113] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.153447] kauditd_printk_skb: 171 callbacks suppressed
	[Nov 1 11:10] kauditd_printk_skb: 19 callbacks suppressed
	[  +0.162289] kauditd_printk_skb: 189 callbacks suppressed
	[  +7.275848] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.130047] kauditd_printk_skb: 253 callbacks suppressed
	[  +7.363826] kauditd_printk_skb: 63 callbacks suppressed
	[Nov 1 11:11] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [17af9600e453ef15684f26ea76667b808bbd1ca091d1d10572bc54428e7aa950] <==
	{"level":"warn","ts":"2025-11-01T11:09:58.540163Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.338753ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T11:09:58.540254Z","caller":"traceutil/trace.go:172","msg":"trace[1889406150] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:314; }","duration":"147.444199ms","start":"2025-11-01T11:09:58.392800Z","end":"2025-11-01T11:09:58.540244Z","steps":["trace[1889406150] 'agreement among raft nodes before linearized reading'  (duration: 147.239131ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:09:58.539449Z","caller":"traceutil/trace.go:172","msg":"trace[1696657071] linearizableReadLoop","detail":"{readStateIndex:321; appliedIndex:321; }","duration":"146.625061ms","start":"2025-11-01T11:09:58.392803Z","end":"2025-11-01T11:09:58.539428Z","steps":["trace[1696657071] 'read index received'  (duration: 146.568319ms)","trace[1696657071] 'applied index is now lower than readState.Index'  (duration: 55.438µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:09:58.555752Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.316573ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-11-01T11:09:58.555810Z","caller":"traceutil/trace.go:172","msg":"trace[1471778077] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:314; }","duration":"110.386016ms","start":"2025-11-01T11:09:58.445410Z","end":"2025-11-01T11:09:58.555796Z","steps":["trace[1471778077] 'agreement among raft nodes before linearized reading'  (duration: 100.218868ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:10:05.869973Z","caller":"traceutil/trace.go:172","msg":"trace[1617331651] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"236.68234ms","start":"2025-11-01T11:10:05.633272Z","end":"2025-11-01T11:10:05.869954Z","steps":["trace[1617331651] 'process raft request'  (duration: 160.95613ms)","trace[1617331651] 'compare'  (duration: 75.472834ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T11:10:06.047802Z","caller":"traceutil/trace.go:172","msg":"trace[226132889] transaction","detail":"{read_only:false; response_revision:425; number_of_response:1; }","duration":"127.853495ms","start":"2025-11-01T11:10:05.919900Z","end":"2025-11-01T11:10:06.047753Z","steps":["trace[226132889] 'process raft request'  (duration: 127.209002ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:10:20.879013Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T11:10:20.879212Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-112657","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.133:2380"],"advertise-client-urls":["https://192.168.83.133:2379"]}
	{"level":"error","ts":"2025-11-01T11:10:20.888066Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T11:10:20.965891Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T11:10:20.965974Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:10:20.965993Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"574f8e030020bbf0","current-leader-member-id":"574f8e030020bbf0"}
	{"level":"info","ts":"2025-11-01T11:10:20.966087Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T11:10:20.966096Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T11:10:20.966349Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.83.133:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T11:10:20.966458Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.83.133:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T11:10:20.966487Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.133:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T11:10:20.966649Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T11:10:20.966759Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T11:10:20.966779Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:10:20.969274Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.83.133:2380"}
	{"level":"error","ts":"2025-11-01T11:10:20.969621Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.83.133:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T11:10:20.969664Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.83.133:2380"}
	{"level":"info","ts":"2025-11-01T11:10:20.969748Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-112657","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.83.133:2380"],"advertise-client-urls":["https://192.168.83.133:2379"]}
	
	
	==> etcd [fdb1f5d3d88e5858fba14591c807783a4a562eb1c18635ab0a6d79b3cfaf2963] <==
	{"level":"info","ts":"2025-11-01T11:10:56.612404Z","caller":"traceutil/trace.go:172","msg":"trace[1640049380] transaction","detail":"{read_only:false; response_revision:441; number_of_response:1; }","duration":"236.765545ms","start":"2025-11-01T11:10:56.375627Z","end":"2025-11-01T11:10:56.612392Z","steps":["trace[1640049380] 'process raft request'  (duration: 235.410418ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:10:56.612968Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"205.124296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2025-11-01T11:10:56.613242Z","caller":"traceutil/trace.go:172","msg":"trace[513679193] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:441; }","duration":"205.553423ms","start":"2025-11-01T11:10:56.407675Z","end":"2025-11-01T11:10:56.613228Z","steps":["trace[513679193] 'agreement among raft nodes before linearized reading'  (duration: 204.330513ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:10:56.613745Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.219904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T11:10:56.613863Z","caller":"traceutil/trace.go:172","msg":"trace[1570698760] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:441; }","duration":"171.342115ms","start":"2025-11-01T11:10:56.442511Z","end":"2025-11-01T11:10:56.613854Z","steps":["trace[1570698760] 'agreement among raft nodes before linearized reading'  (duration: 171.102772ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:10:56.614788Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"207.062273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-11-01T11:10:56.614818Z","caller":"traceutil/trace.go:172","msg":"trace[1663190627] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:441; }","duration":"207.096378ms","start":"2025-11-01T11:10:56.407714Z","end":"2025-11-01T11:10:56.614810Z","steps":["trace[1663190627] 'agreement among raft nodes before linearized reading'  (duration: 206.979438ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:10:56.906683Z","caller":"traceutil/trace.go:172","msg":"trace[1955872032] linearizableReadLoop","detail":"{readStateIndex:463; appliedIndex:463; }","duration":"268.835066ms","start":"2025-11-01T11:10:56.637823Z","end":"2025-11-01T11:10:56.906658Z","steps":["trace[1955872032] 'read index received'  (duration: 268.828946ms)","trace[1955872032] 'applied index is now lower than readState.Index'  (duration: 5.022µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.099542Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"461.674814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:endpointslicemirroring-controller\" limit:1 ","response":"range_response_count:1 size:850"}
	{"level":"info","ts":"2025-11-01T11:10:57.099651Z","caller":"traceutil/trace.go:172","msg":"trace[2068833302] range","detail":"{range_begin:/registry/clusterroles/system:controller:endpointslicemirroring-controller; range_end:; response_count:1; response_revision:441; }","duration":"461.841361ms","start":"2025-11-01T11:10:56.637793Z","end":"2025-11-01T11:10:57.099634Z","steps":["trace[2068833302] 'agreement among raft nodes before linearized reading'  (duration: 268.942825ms)","trace[2068833302] 'range keys from in-memory index tree'  (duration: 192.512829ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.099693Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:10:56.637778Z","time spent":"461.901172ms","remote":"127.0.0.1:38908","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":1,"response size":873,"request content":"key:\"/registry/clusterroles/system:controller:endpointslicemirroring-controller\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T11:10:57.100188Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.882765ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13542493675358751828 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-pmht9\" mod_revision:403 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-pmht9\" value_size:5042 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-pmht9\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T11:10:57.100259Z","caller":"traceutil/trace.go:172","msg":"trace[1530526721] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"463.872975ms","start":"2025-11-01T11:10:56.636375Z","end":"2025-11-01T11:10:57.100248Z","steps":["trace[1530526721] 'process raft request'  (duration: 270.398206ms)","trace[1530526721] 'compare'  (duration: 192.45219ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.100382Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:10:56.636265Z","time spent":"464.082049ms","remote":"127.0.0.1:38536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5093,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-pmht9\" mod_revision:403 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-pmht9\" value_size:5042 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-proxy-pmht9\" > >"}
	{"level":"info","ts":"2025-11-01T11:10:57.378672Z","caller":"traceutil/trace.go:172","msg":"trace[1675861580] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:464; }","duration":"259.313066ms","start":"2025-11-01T11:10:57.119339Z","end":"2025-11-01T11:10:57.378652Z","steps":["trace[1675861580] 'read index received'  (duration: 259.308269ms)","trace[1675861580] 'applied index is now lower than readState.Index'  (duration: 4.282µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.635923Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"516.563842ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:controller:job-controller\" limit:1 ","response":"range_response_count:1 size:782"}
	{"level":"info","ts":"2025-11-01T11:10:57.636033Z","caller":"traceutil/trace.go:172","msg":"trace[1407849316] range","detail":"{range_begin:/registry/clusterroles/system:controller:job-controller; range_end:; response_count:1; response_revision:442; }","duration":"516.680219ms","start":"2025-11-01T11:10:57.119336Z","end":"2025-11-01T11:10:57.636016Z","steps":["trace[1407849316] 'agreement among raft nodes before linearized reading'  (duration: 260.14631ms)","trace[1407849316] 'range keys from in-memory index tree'  (duration: 256.334312ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.636067Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:10:57.119271Z","time spent":"516.788171ms","remote":"127.0.0.1:38908","response type":"/etcdserverpb.KV/Range","request count":0,"request size":59,"response count":1,"response size":805,"request content":"key:\"/registry/clusterroles/system:controller:job-controller\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T11:10:57.635948Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.355126ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13542493675358751836 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" mod_revision:435 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" value_size:7165 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T11:10:57.636590Z","caller":"traceutil/trace.go:172","msg":"trace[1818149970] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"518.299723ms","start":"2025-11-01T11:10:57.118272Z","end":"2025-11-01T11:10:57.636572Z","steps":["trace[1818149970] 'process raft request'  (duration: 261.264621ms)","trace[1818149970] 'compare'  (duration: 256.266124ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.636666Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:10:57.118261Z","time spent":"518.360228ms","remote":"127.0.0.1:38536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7227,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" mod_revision:435 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" value_size:7165 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-pause-112657\" > >"}
	{"level":"info","ts":"2025-11-01T11:10:57.636726Z","caller":"traceutil/trace.go:172","msg":"trace[764704833] linearizableReadLoop","detail":"{readStateIndex:465; appliedIndex:464; }","duration":"193.739452ms","start":"2025-11-01T11:10:57.442975Z","end":"2025-11-01T11:10:57.636714Z","steps":["trace[764704833] 'read index received'  (duration: 72.933266ms)","trace[764704833] 'applied index is now lower than readState.Index'  (duration: 120.805311ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:10:57.636780Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"193.807452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T11:10:57.636801Z","caller":"traceutil/trace.go:172","msg":"trace[1123814084] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:443; }","duration":"193.829915ms","start":"2025-11-01T11:10:57.442965Z","end":"2025-11-01T11:10:57.636795Z","steps":["trace[1123814084] 'agreement among raft nodes before linearized reading'  (duration: 193.787452ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:10:57.741377Z","caller":"traceutil/trace.go:172","msg":"trace[1415454920] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"101.677672ms","start":"2025-11-01T11:10:57.639684Z","end":"2025-11-01T11:10:57.741362Z","steps":["trace[1415454920] 'process raft request'  (duration: 96.44196ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:11:14 up 1 min,  0 users,  load average: 1.55, 0.63, 0.23
	Linux pause-112657 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [f8e195bbfd8af4cc01974cac90a1813602ea186e4abea2dd378927416f5dc0b5] <==
	{"level":"warn","ts":"2025-11-01T11:10:44.109537Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":84,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.137176Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":85,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.163734Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":86,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.190603Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":87,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.216002Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":88,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.242385Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":89,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.268842Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":90,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.295111Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":91,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.320435Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":92,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.346882Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":93,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.371414Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":94,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.399784Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":95,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.425149Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":96,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.450472Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":97,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.477483Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":98,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	{"level":"warn","ts":"2025-11-01T11:10:44.501272Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00129a1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":99,"error":"rpc error: code = Canceled desc = grpc: the client connection is closing"}
	E1101 11:10:44.501421       1 controller.go:97] Error removing old endpoints from kubernetes service: rpc error: code = Canceled desc = grpc: the client connection is closing
	W1101 11:10:44.588183       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1101 11:10:44.588271       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1101 11:10:45.589062       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1101 11:10:45.589826       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1101 11:10:46.588685       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1101 11:10:46.588944       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W1101 11:10:47.588832       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E1101 11:10:47.589143       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
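
The errors above come from the attempt-1 apiserver (f8e195bbfd8af), which logged etcd client connection errors and connection-refused retries during the restart window before exiting; the attempt-2 instance in the next block comes up cleanly. A quick sketch to confirm the current apiserver and its etcd check are healthy, assuming the context matches the profile:

    kubectl --context pause-112657 get --raw='/readyz?verbose'
    kubectl --context pause-112657 get --raw='/readyz/etcd'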
	
	
	==> kube-apiserver [f9ea1384edf60f795b201ee53bb5bb7090a53d71fa1667e6af09c1fdcfbe0740] <==
	I1101 11:10:55.372220       1 policy_source.go:240] refreshing policies
	I1101 11:10:55.372254       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 11:10:55.372389       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 11:10:55.382005       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 11:10:55.400024       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 11:10:55.402407       1 aggregator.go:171] initial CRD sync complete...
	I1101 11:10:55.402465       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 11:10:55.402484       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 11:10:55.402500       1 cache.go:39] Caches are synced for autoregister controller
	I1101 11:10:55.430690       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 11:10:55.435064       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 11:10:55.437003       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 11:10:55.437049       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 11:10:55.438706       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 11:10:55.449877       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 11:10:56.243882       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 11:10:56.616543       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1101 11:10:57.953634       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.83.133]
	I1101 11:10:57.955407       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 11:10:57.961748       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 11:10:58.172367       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 11:10:58.380039       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 11:10:58.442753       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 11:10:58.461768       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 11:11:00.394594       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a5f4bd825d40113738943da3b6f7b2025cb9516c6519f050d0f26455627a6e67] <==
	I1101 11:10:39.853684       1 serving.go:386] Generated self-signed cert in-memory
	I1101 11:10:40.613436       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1101 11:10:40.613488       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:10:40.617489       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 11:10:40.617609       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 11:10:40.618086       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1101 11:10:40.618879       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [cd4cd50e19c80294183a84f1d6b4ba1319f1f8e73be2fdbbb0ea63eb0cfa3d1e] <==
	I1101 11:10:59.758672       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 11:10:59.758787       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:10:59.763160       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 11:10:59.763638       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 11:10:59.768635       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 11:10:59.771324       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 11:10:59.771786       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 11:10:59.771797       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 11:10:59.771808       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 11:10:59.773602       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 11:10:59.773718       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 11:10:59.775049       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 11:10:59.778758       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 11:10:59.778866       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 11:10:59.783366       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 11:10:59.789944       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 11:10:59.789993       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 11:10:59.790010       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 11:10:59.798199       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 11:10:59.804256       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 11:10:59.813658       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 11:10:59.819200       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 11:10:59.819485       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 11:10:59.819577       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-112657"
	I1101 11:10:59.819644       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-proxy [8cac02c746090b038b03b7e3666ef90276d94f274eb8926c189d832b02e7d27b] <==
	E1101 11:10:50.381399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-112657&limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 11:10:57.303773       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:10:57.303845       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.133"]
	E1101 11:10:57.304004       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:10:57.352945       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 11:10:57.353057       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 11:10:57.353081       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:10:57.367352       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:10:57.367924       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:10:57.368077       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:10:57.376049       1 config.go:200] "Starting service config controller"
	I1101 11:10:57.376086       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:10:57.376106       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:10:57.376111       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:10:57.376126       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:10:57.376133       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:10:57.376625       1 config.go:309] "Starting node config controller"
	I1101 11:10:57.376662       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:10:57.376670       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:10:57.476780       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:10:57.476808       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:10:57.476831       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b140a0c1d767d4f7ece6853aeb5d9d8f8f58e137400cc9bf3910f496e71c1b79] <==
	I1101 11:10:01.891038       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:10:01.991370       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:10:01.991407       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.133"]
	E1101 11:10:01.991481       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:10:02.042143       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 11:10:02.042265       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 11:10:02.042436       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:10:02.061140       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:10:02.061880       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:10:02.062109       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:10:02.069940       1 config.go:200] "Starting service config controller"
	I1101 11:10:02.069952       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:10:02.069970       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:10:02.069973       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:10:02.069984       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:10:02.069988       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:10:02.073780       1 config.go:309] "Starting node config controller"
	I1101 11:10:02.073966       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:10:02.170872       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:10:02.170907       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:10:02.170957       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 11:10:02.175366       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [4f557d4e8f14008c0f3af610b5e7d21f6bc34a9ef9b305c98652539ec8b3a059] <==
	E1101 11:10:43.412977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.83.133:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 11:10:44.537003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.83.133:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 11:10:44.582046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.83.133:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 11:10:44.785423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.83.133:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 11:10:44.835125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.83.133:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 11:10:45.010711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.83.133:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 11:10:45.382392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.83.133:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 11:10:45.461215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.83.133:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 11:10:45.512442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.83.133:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 11:10:45.520227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.83.133:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 11:10:45.598746       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.83.133:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 11:10:45.618051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.83.133:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 11:10:45.636982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.83.133:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 11:10:45.773890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.83.133:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 11:10:45.798774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.83.133:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 11:10:45.837105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.83.133:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 11:10:46.003678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.83.133:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 11:10:46.148137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.83.133:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 11:10:46.500604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.83.133:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.83.133:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 11:10:48.469482       1 server.go:286] "handlers are not fully synchronized" err="context canceled"
	I1101 11:10:48.469970       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 11:10:48.470024       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 11:10:48.470089       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:10:48.470186       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 11:10:48.470205       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a4aa50e95ae4ca4ff8424b31d4eed01ebd730345a23823c47c6c7c5d5f53b248] <==
	I1101 11:10:53.901490       1 serving.go:386] Generated self-signed cert in-memory
	I1101 11:10:55.417681       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 11:10:55.417843       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:10:55.423396       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 11:10:55.423439       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 11:10:55.423496       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:10:55.423523       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:10:55.423537       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 11:10:55.423542       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 11:10:55.423864       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 11:10:55.423942       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 11:10:55.524731       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 11:10:55.524736       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 11:10:55.524804       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 11:10:52 pause-112657 kubelet[3528]: E1101 11:10:52.555157    3528 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-112657\" not found" node="pause-112657"
	Nov 01 11:10:52 pause-112657 kubelet[3528]: E1101 11:10:52.557865    3528 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-112657\" not found" node="pause-112657"
	Nov 01 11:10:53 pause-112657 kubelet[3528]: E1101 11:10:53.560973    3528 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-112657\" not found" node="pause-112657"
	Nov 01 11:10:53 pause-112657 kubelet[3528]: E1101 11:10:53.562415    3528 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-112657\" not found" node="pause-112657"
	Nov 01 11:10:53 pause-112657 kubelet[3528]: E1101 11:10:53.562934    3528 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-112657\" not found" node="pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.432389    3528 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: E1101 11:10:55.456426    3528 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-112657\" already exists" pod="kube-system/kube-apiserver-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.456461    3528 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: E1101 11:10:55.475076    3528 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-112657\" already exists" pod="kube-system/kube-controller-manager-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.475212    3528 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.487624    3528 kubelet_node_status.go:124] "Node was previously registered" node="pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.487814    3528 kubelet_node_status.go:78] "Successfully registered node" node="pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.487871    3528 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.490573    3528 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: E1101 11:10:55.492976    3528 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-112657\" already exists" pod="kube-system/kube-scheduler-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: I1101 11:10:55.493000    3528 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-112657"
	Nov 01 11:10:55 pause-112657 kubelet[3528]: E1101 11:10:55.506981    3528 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-112657\" already exists" pod="kube-system/etcd-pause-112657"
	Nov 01 11:10:56 pause-112657 kubelet[3528]: I1101 11:10:56.298792    3528 apiserver.go:52] "Watching apiserver"
	Nov 01 11:10:56 pause-112657 kubelet[3528]: I1101 11:10:56.332617    3528 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 11:10:56 pause-112657 kubelet[3528]: I1101 11:10:56.402690    3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93cedff1-d264-4c71-af06-95e4b53e637e-xtables-lock\") pod \"kube-proxy-pmht9\" (UID: \"93cedff1-d264-4c71-af06-95e4b53e637e\") " pod="kube-system/kube-proxy-pmht9"
	Nov 01 11:10:56 pause-112657 kubelet[3528]: I1101 11:10:56.403733    3528 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93cedff1-d264-4c71-af06-95e4b53e637e-lib-modules\") pod \"kube-proxy-pmht9\" (UID: \"93cedff1-d264-4c71-af06-95e4b53e637e\") " pod="kube-system/kube-proxy-pmht9"
	Nov 01 11:11:00 pause-112657 kubelet[3528]: E1101 11:11:00.532477    3528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761995460531637145  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 11:11:00 pause-112657 kubelet[3528]: E1101 11:11:00.532540    3528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761995460531637145  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 11:11:10 pause-112657 kubelet[3528]: E1101 11:11:10.535104    3528 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761995470533878361  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 11:11:10 pause-112657 kubelet[3528]: E1101 11:11:10.535205    3528 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761995470533878361  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-112657 -n pause-112657
helpers_test.go:269: (dbg) Run:  kubectl --context pause-112657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (67.76s)
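Note on the kube-proxy advisory in the dump above: both kube-proxy instances log "nodePortAddresses is unset; NodePort connections will be accepted on all local IPs" and suggest `--nodeport-addresses primary`. A minimal sketch of how that setting could be inspected and applied on this profile, assuming the standard kubeadm-managed kube-proxy ConfigMap and DaemonSet labels; the context name comes from the logs, and the edit itself is illustrative, not part of the test:

	# Check whether nodePortAddresses is set in the embedded KubeProxyConfiguration (config.conf)
	kubectl --context pause-112657 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses

	# Illustrative change: set nodePortAddresses to ["primary"] in config.conf, then recreate the kube-proxy pods
	kubectl --context pause-112657 -n kube-system edit configmap kube-proxy
	kubectl --context pause-112657 -n kube-system delete pod -l k8s-app=kube-proxy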

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jt94t" [797f79dc-31d4-4da5-af7c-2b7c3c4d804b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-11-01 11:28:14.578140723 +0000 UTC m=+5909.320348701
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 describe po kubernetes-dashboard-855c9754f9-jt94t -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-287419 describe po kubernetes-dashboard-855c9754f9-jt94t -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-jt94t
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-287419/192.168.72.189
Start Time:       Sat, 01 Nov 2025 11:19:10 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgm2k (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-vgm2k:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                    From               Message
----     ------            ----                   ----               -------
Warning  FailedScheduling  9m8s                   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal   Scheduled         9m4s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jt94t to default-k8s-diff-port-287419
Warning  Failed            6m42s (x2 over 8m26s)  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling           4m2s (x5 over 9m4s)    kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed            3m31s (x5 over 8m26s)  kubelet            Error: ErrImagePull
Warning  Failed            3m31s (x3 over 7m40s)  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed            2m8s (x16 over 8m25s)  kubelet            Error: ImagePullBackOff
Normal   BackOff           61s (x21 over 8m25s)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 logs kubernetes-dashboard-855c9754f9-jt94t -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-287419 logs kubernetes-dashboard-855c9754f9-jt94t -n kubernetes-dashboard: exit status 1 (74.257577ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-jt94t" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-287419 logs kubernetes-dashboard-855c9754f9-jt94t -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
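The events above point at Docker Hub's unauthenticated pull rate limit (toomanyrequests) rather than a problem with the dashboard manifest itself. A minimal sketch, using the profile and image reference shown in the events, of how the pull could be reproduced from the node and worked around by side-loading an authenticated local pull; note the pod references the image by digest, so whether the cached copy satisfies the pull still depends on the pull policy and reference handling:

	# Reproduce the failing pull directly on the node, using the same digest as the pod spec
	minikube -p default-k8s-diff-port-287419 ssh -- sudo crictl pull docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93

	# Workaround sketch: pull once with authenticated Docker Hub credentials, then load into the profile
	docker login
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	minikube -p default-k8s-diff-port-287419 image load docker.io/kubernetesui/dashboard:v2.7.0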
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-287419 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-287419 logs -n 25: (1.347159688s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-268638 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:18 UTC │ 01 Nov 25 11:19 UTC │
	│ image   │ no-preload-294319 image list --format=json                                                                                                                                                                                                  │ no-preload-294319  │ jenkins │ v1.37.0 │ 01 Nov 25 11:18 UTC │ 01 Nov 25 11:18 UTC │
	│ pause   │ -p no-preload-294319 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-294319  │ jenkins │ v1.37.0 │ 01 Nov 25 11:18 UTC │ 01 Nov 25 11:19 UTC │
	│ unpause │ -p no-preload-294319 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-294319  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p no-preload-294319                                                                                                                                                                                                                        │ no-preload-294319  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p no-preload-294319                                                                                                                                                                                                                        │ no-preload-294319  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /data | grep /data                                                                                                                                                                                              │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/minikube | grep /var/lib/minikube                                                                                                                                                                      │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker                                                                                                                                                                │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox                                                                                                                                                                        │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/cni | grep /var/lib/cni                                                                                                                                                                                │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet                                                                                                                                                                        │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/docker | grep /var/lib/docker                                                                                                                                                                          │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'                                                                                                                                                           │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p guest-290834                                                                                                                                                                                                                             │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ image   │ embed-certs-571864 image list --format=json                                                                                                                                                                                                 │ embed-certs-571864 │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ pause   │ -p embed-certs-571864 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-571864 │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ unpause │ -p embed-certs-571864 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-571864 │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p embed-certs-571864                                                                                                                                                                                                                       │ embed-certs-571864 │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p embed-certs-571864                                                                                                                                                                                                                       │ embed-certs-571864 │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ image   │ newest-cni-268638 image list --format=json                                                                                                                                                                                                  │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ pause   │ -p newest-cni-268638 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ unpause │ -p newest-cni-268638 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p newest-cni-268638                                                                                                                                                                                                                        │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p newest-cni-268638                                                                                                                                                                                                                        │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:18:26
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:18:26.575966  119309 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:18:26.576303  119309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:18:26.576316  119309 out.go:374] Setting ErrFile to fd 2...
	I1101 11:18:26.576323  119309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:18:26.576668  119309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 11:18:26.577276  119309 out.go:368] Setting JSON to false
	I1101 11:18:26.578558  119309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10855,"bootTime":1761985052,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 11:18:26.578686  119309 start.go:143] virtualization: kvm guest
	I1101 11:18:26.581032  119309 out.go:179] * [newest-cni-268638] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 11:18:26.582374  119309 notify.go:221] Checking for updates...
	I1101 11:18:26.582382  119309 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:18:26.584687  119309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:18:26.586092  119309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:26.590942  119309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 11:18:26.592615  119309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 11:18:26.593782  119309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:18:26.595639  119309 config.go:182] Loaded profile config "newest-cni-268638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:18:26.596410  119309 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:18:26.650853  119309 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 11:18:26.653013  119309 start.go:309] selected driver: kvm2
	I1101 11:18:26.653037  119309 start.go:930] validating driver "kvm2" against &{Name:newest-cni-268638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.241 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:18:26.653229  119309 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:18:26.654941  119309 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 11:18:26.655015  119309 cni.go:84] Creating CNI manager for ""
	I1101 11:18:26.655102  119309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:18:26.655172  119309 start.go:353] cluster config:
	{Name:newest-cni-268638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.241 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:18:26.655313  119309 iso.go:125] acquiring lock: {Name:mk49d9a272bb99d336f82dfc5631a4c8ce9271c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:18:26.657121  119309 out.go:179] * Starting "newest-cni-268638" primary control-plane node in "newest-cni-268638" cluster
	I1101 11:18:24.257509  118233 cri.go:89] found id: ""
	I1101 11:18:24.257589  118233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:18:24.282166  118233 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:18:24.282196  118233 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:18:24.282259  118233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:18:24.297617  118233 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:18:24.298262  118233 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-294319" does not appear in /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:24.298591  118233 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-70113/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-294319" cluster setting kubeconfig missing "no-preload-294319" context setting]
	I1101 11:18:24.299168  118233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:24.300823  118233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:18:24.320723  118233 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.49
	I1101 11:18:24.320766  118233 kubeadm.go:1161] stopping kube-system containers ...
	I1101 11:18:24.320783  118233 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 11:18:24.320845  118233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:18:24.394074  118233 cri.go:89] found id: ""
	I1101 11:18:24.394163  118233 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 11:18:24.421617  118233 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:18:24.435632  118233 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:18:24.435657  118233 kubeadm.go:158] found existing configuration files:
	
	I1101 11:18:24.435708  118233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:18:24.454470  118233 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:18:24.454579  118233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:18:24.473401  118233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:18:24.492090  118233 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:18:24.492178  118233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:18:24.509757  118233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:18:24.527399  118233 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:18:24.527492  118233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:18:24.544597  118233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:18:24.558312  118233 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:18:24.558380  118233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:18:24.575629  118233 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:18:24.590163  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:24.768738  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:26.498750  118233 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.729965792s)
	I1101 11:18:26.498832  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:26.884044  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:27.035583  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:27.219246  118233 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:18:27.219341  118233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:27.719611  118233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:28.219499  118233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:28.720342  118233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:28.777475  118233 api_server.go:72] duration metric: took 1.558241424s to wait for apiserver process to appear ...
	I1101 11:18:28.777506  118233 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:18:28.777527  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:28.779007  118233 api_server.go:269] stopped: https://192.168.39.49:8443/healthz: Get "https://192.168.39.49:8443/healthz": dial tcp 192.168.39.49:8443: connect: connection refused
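
	The loop above repeatedly runs `sudo pgrep` for the kube-apiserver process and then probes the apiserver's /healthz endpoint until it answers. Purely as an illustration of that polling pattern (a minimal sketch, not minikube's actual api_server.go implementation), a Go version might look like the following; the endpoint URL and the roughly 500ms retry cadence are taken from the log, and TLS verification is skipped in the sketch because the apiserver serves a cluster-local certificate:

	package main

	import (
	    "crypto/tls"
	    "fmt"
	    "net/http"
	    "time"
	)

	// pollHealthz keeps probing url until it returns HTTP 200 or the deadline passes.
	// TLS verification is skipped because the apiserver cert is signed by a cluster-local CA.
	func pollHealthz(url string, timeout time.Duration) error {
	    client := &http.Client{
	        Timeout:   2 * time.Second,
	        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    }
	    deadline := time.Now().Add(timeout)
	    for time.Now().Before(deadline) {
	        resp, err := client.Get(url)
	        if err == nil {
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK {
	                return nil // healthz returned 200: apiserver is healthy
	            }
	        }
	        time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	    }
	    return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
	}

	func main() {
	    // Endpoint taken from the log above (https://192.168.39.49:8443/healthz).
	    if err := pollHealthz("https://192.168.39.49:8443/healthz", 30*time.Second); err != nil {
	        fmt.Println(err)
	    }
	}

	The 403 ("system:anonymous" forbidden) and 500 (failed poststart hooks such as rbac/bootstrap-roles) responses further down are exactly what such a loop observes while the restarted apiserver finishes bootstrapping, before it finally returns 200.
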
	I1101 11:18:25.532670  118797 main.go:143] libmachine: domain embed-certs-571864 has defined MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:25.533256  118797 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:0a:19", ip: ""} in network mk-embed-certs-571864: {Iface:virbr3 ExpiryTime:2025-11-01 12:18:17 +0000 UTC Type:0 Mac:52:54:00:d1:0a:19 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:embed-certs-571864 Clientid:01:52:54:00:d1:0a:19}
	I1101 11:18:25.533311  118797 main.go:143] libmachine: domain embed-certs-571864 has defined IP address 192.168.61.132 and MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:25.533576  118797 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1101 11:18:25.538934  118797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:18:25.557628  118797 kubeadm.go:884] updating cluster {Name:embed-certs-571864 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-571864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:18:25.557794  118797 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:18:25.557859  118797 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:18:25.610123  118797 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 11:18:25.610214  118797 ssh_runner.go:195] Run: which lz4
	I1101 11:18:25.615610  118797 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 11:18:25.621258  118797 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 11:18:25.621295  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 11:18:27.537982  118797 crio.go:462] duration metric: took 1.922402517s to copy over tarball
	I1101 11:18:27.538059  118797 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 11:18:29.590006  118797 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.051908617s)
	I1101 11:18:29.590067  118797 crio.go:469] duration metric: took 2.052053158s to extract the tarball
	I1101 11:18:29.590078  118797 ssh_runner.go:146] rm: /preloaded.tar.lz4
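
	The preload step above scp's the cached preloaded-images tarball to the node and unpacks it with `sudo tar --xattrs ... -I lz4 -C /var -xf /preloaded.tar.lz4`. As a rough, hypothetical sketch of the same unpacking done in-process (not how minikube actually does it on the host), Go's archive/tar can be combined with an lz4 decompressor such as github.com/pierrec/lz4/v4 (an assumed dependency); this version only lists the archive entries rather than writing files out:

	package main

	import (
	    "archive/tar"
	    "fmt"
	    "io"
	    "os"

	    "github.com/pierrec/lz4/v4"
	)

	func main() {
	    // Path is illustrative; the log extracts /preloaded.tar.lz4 into /var.
	    f, err := os.Open("/preloaded.tar.lz4")
	    if err != nil {
	        panic(err)
	    }
	    defer f.Close()

	    tr := tar.NewReader(lz4.NewReader(f)) // decompress lz4, then walk the tar stream
	    for {
	        hdr, err := tr.Next()
	        if err == io.EOF {
	            break // end of archive
	        } else if err != nil {
	            panic(err)
	        }
	        fmt.Println(hdr.Name) // just list entries; a real extractor would write them to disk
	    }
	}
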
	I1101 11:18:29.645938  118797 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:18:29.707543  118797 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:18:29.707578  118797 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:18:29.707590  118797 kubeadm.go:935] updating node { 192.168.61.132 8443 v1.34.1 crio true true} ...
	I1101 11:18:29.707732  118797 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-571864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-571864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:18:29.707844  118797 ssh_runner.go:195] Run: crio config
	I1101 11:18:29.778473  118797 cni.go:84] Creating CNI manager for ""
	I1101 11:18:29.778498  118797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:18:29.778515  118797 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:18:29.778561  118797 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-571864 NodeName:embed-certs-571864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:18:29.778754  118797 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-571864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.132"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
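	The kubeadm config dumped above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by `---`), which the log shows being written to /var/tmp/minikube/kubeadm.yaml.new and later copied into place. As an illustrative sketch only (using the assumed gopkg.in/yaml.v3 dependency, not anything minikube ships), the individual documents can be split and identified like this:

	package main

	import (
	    "fmt"
	    "io"
	    "os"

	    "gopkg.in/yaml.v3"
	)

	func main() {
	    // Path is illustrative; it matches the file the log copies into place.
	    f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	    if err != nil {
	        panic(err)
	    }
	    defer f.Close()

	    dec := yaml.NewDecoder(f)
	    for {
	        // Only pull out the identifying fields of each YAML document.
	        var doc struct {
	            APIVersion string `yaml:"apiVersion"`
	            Kind       string `yaml:"kind"`
	        }
	        if err := dec.Decode(&doc); err == io.EOF {
	            break // no more documents in the stream
	        } else if err != nil {
	            panic(err)
	        }
	        fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	    }
	}
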
	I1101 11:18:29.778834  118797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:18:29.793364  118797 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:18:29.793443  118797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:18:29.811009  118797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1101 11:18:29.843479  118797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:18:29.876040  118797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1101 11:18:29.903565  118797 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I1101 11:18:29.908600  118797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:18:29.932848  118797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:18:28.216807  119092 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.189:22: connect: no route to host
	I1101 11:18:26.658521  119309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:18:26.658591  119309 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 11:18:26.658604  119309 cache.go:59] Caching tarball of preloaded images
	I1101 11:18:26.658730  119309 preload.go:233] Found /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 11:18:26.658752  119309 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:18:26.658903  119309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/config.json ...
	I1101 11:18:26.659211  119309 start.go:360] acquireMachinesLock for newest-cni-268638: {Name:mk53a05d125fe91ead2a39c6bbf2ba926c471e2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 11:18:29.277988  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:31.834410  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:18:31.834451  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:18:31.834472  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:31.894951  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:18:31.894986  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:18:32.278625  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:32.288324  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:32.288353  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:32.777914  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:32.783509  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:32.783554  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:33.278329  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:33.284284  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:33.284322  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:33.778876  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:33.784229  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:33.784328  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:34.277929  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:34.287409  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I1101 11:18:34.297825  118233 api_server.go:141] control plane version: v1.34.1
	I1101 11:18:34.297862  118233 api_server.go:131] duration metric: took 5.520347755s to wait for apiserver health ...
	I1101 11:18:34.297878  118233 cni.go:84] Creating CNI manager for ""
	I1101 11:18:34.297888  118233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:18:34.299237  118233 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:18:34.300556  118233 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:18:34.322317  118233 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 11:18:34.361848  118233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:18:34.368029  118233 system_pods.go:59] 8 kube-system pods found
	I1101 11:18:34.368077  118233 system_pods.go:61] "coredns-66bc5c9577-x57vz" [eb2f3b71-41f2-4ae3-ac71-9ccc871abfc8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:18:34.368091  118233 system_pods.go:61] "etcd-no-preload-294319" [f4aadb8a-a6a7-4936-98fa-6e662ff2471d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:34.368112  118233 system_pods.go:61] "kube-apiserver-no-preload-294319" [fe68f1cd-151d-472c-955d-6c425117c91d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:18:34.368123  118233 system_pods.go:61] "kube-controller-manager-no-preload-294319" [efb452de-2f7a-4212-96c5-e5a8780b7694] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:18:34.368145  118233 system_pods.go:61] "kube-proxy-2qfgw" [f2d91d64-ec0c-45bf-bf3d-23b5dd8a78e4] Running
	I1101 11:18:34.368154  118233 system_pods.go:61] "kube-scheduler-no-preload-294319" [ec579a88-3103-48ff-b1cf-3463d6080e8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:34.368167  118233 system_pods.go:61] "metrics-server-746fcd58dc-dn4qd" [27a30dc7-b5c2-4eae-979d-72266debe708] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:34.368182  118233 system_pods.go:61] "storage-provisioner" [3af75b2c-851c-4786-8aab-77980cca46b5] Running
	I1101 11:18:34.368192  118233 system_pods.go:74] duration metric: took 6.314947ms to wait for pod list to return data ...
	I1101 11:18:34.368208  118233 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:18:34.375580  118233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:18:34.375616  118233 node_conditions.go:123] node cpu capacity is 2
	I1101 11:18:34.375634  118233 node_conditions.go:105] duration metric: took 7.419177ms to run NodePressure ...
	I1101 11:18:34.375700  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:34.745366  118233 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 11:18:34.751497  118233 kubeadm.go:744] kubelet initialised
	I1101 11:18:34.751551  118233 kubeadm.go:745] duration metric: took 6.134966ms waiting for restarted kubelet to initialise ...
	I1101 11:18:34.751577  118233 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:18:34.778054  118233 ops.go:34] apiserver oom_adj: -16
	I1101 11:18:34.778088  118233 kubeadm.go:602] duration metric: took 10.495882668s to restartPrimaryControlPlane
	I1101 11:18:34.778100  118233 kubeadm.go:403] duration metric: took 10.586894339s to StartCluster
	I1101 11:18:34.778122  118233 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:34.778205  118233 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:34.779356  118233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:34.779671  118233 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:18:34.779963  118233 config.go:182] Loaded profile config "no-preload-294319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:18:34.780027  118233 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:18:34.780110  118233 addons.go:70] Setting storage-provisioner=true in profile "no-preload-294319"
	I1101 11:18:34.780146  118233 addons.go:239] Setting addon storage-provisioner=true in "no-preload-294319"
	W1101 11:18:34.780154  118233 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:18:34.780180  118233 host.go:66] Checking if "no-preload-294319" exists ...
	I1101 11:18:34.780204  118233 addons.go:70] Setting default-storageclass=true in profile "no-preload-294319"
	I1101 11:18:34.780226  118233 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-294319"
	I1101 11:18:34.780235  118233 addons.go:70] Setting dashboard=true in profile "no-preload-294319"
	I1101 11:18:34.780259  118233 addons.go:239] Setting addon dashboard=true in "no-preload-294319"
	W1101 11:18:34.780269  118233 addons.go:248] addon dashboard should already be in state true
	I1101 11:18:34.780271  118233 addons.go:70] Setting metrics-server=true in profile "no-preload-294319"
	I1101 11:18:34.780289  118233 addons.go:239] Setting addon metrics-server=true in "no-preload-294319"
	W1101 11:18:34.780296  118233 addons.go:248] addon metrics-server should already be in state true
	I1101 11:18:34.780299  118233 host.go:66] Checking if "no-preload-294319" exists ...
	I1101 11:18:34.780317  118233 host.go:66] Checking if "no-preload-294319" exists ...
	I1101 11:18:34.781686  118233 out.go:179] * Verifying Kubernetes components...
	I1101 11:18:34.783157  118233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:18:34.784689  118233 addons.go:239] Setting addon default-storageclass=true in "no-preload-294319"
	W1101 11:18:34.784710  118233 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:18:34.784734  118233 host.go:66] Checking if "no-preload-294319" exists ...
	I1101 11:18:34.786058  118233 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:18:34.786098  118233 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 11:18:34.787053  118233 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:18:34.787074  118233 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:18:34.787211  118233 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 11:18:34.788053  118233 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 11:18:34.788073  118233 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 11:18:34.788258  118233 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:18:34.788276  118233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:18:34.790158  118233 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 11:18:30.080717  118797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:18:30.117878  118797 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864 for IP: 192.168.61.132
	I1101 11:18:30.117910  118797 certs.go:195] generating shared ca certs ...
	I1101 11:18:30.117933  118797 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:30.118138  118797 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 11:18:30.118199  118797 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 11:18:30.118214  118797 certs.go:257] generating profile certs ...
	I1101 11:18:30.118347  118797 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/client.key
	I1101 11:18:30.118456  118797 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/apiserver.key.883be73b
	I1101 11:18:30.118556  118797 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/proxy-client.key
	I1101 11:18:30.118806  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem (1338 bytes)
	W1101 11:18:30.118861  118797 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998_empty.pem, impossibly tiny 0 bytes
	I1101 11:18:30.118874  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 11:18:30.118911  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:18:30.118950  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:18:30.118990  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 11:18:30.119080  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:18:30.120035  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:18:30.179115  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:18:30.223455  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:18:30.260204  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:18:30.299705  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 11:18:30.343072  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:18:30.387777  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:18:30.437828  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 11:18:30.483847  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem --> /usr/share/ca-certificates/73998.pem (1338 bytes)
	I1101 11:18:30.522708  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /usr/share/ca-certificates/739982.pem (1708 bytes)
	I1101 11:18:30.568043  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:18:30.610967  118797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:18:30.638495  118797 ssh_runner.go:195] Run: openssl version
	I1101 11:18:30.646344  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73998.pem && ln -fs /usr/share/ca-certificates/73998.pem /etc/ssl/certs/73998.pem"
	I1101 11:18:30.665487  118797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73998.pem
	I1101 11:18:30.673863  118797 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:03 /usr/share/ca-certificates/73998.pem
	I1101 11:18:30.673935  118797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73998.pem
	I1101 11:18:30.685608  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73998.pem /etc/ssl/certs/51391683.0"
	I1101 11:18:30.702888  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739982.pem && ln -fs /usr/share/ca-certificates/739982.pem /etc/ssl/certs/739982.pem"
	I1101 11:18:30.724178  118797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739982.pem
	I1101 11:18:30.732804  118797 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:03 /usr/share/ca-certificates/739982.pem
	I1101 11:18:30.732878  118797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739982.pem
	I1101 11:18:30.744302  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/739982.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:18:30.764295  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:18:30.780037  118797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:30.788009  118797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:30.788096  118797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:30.796430  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:18:30.820463  118797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:18:30.829981  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:18:30.844019  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:18:30.859517  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:18:30.872995  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:18:30.885149  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:18:30.895855  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
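
	The `openssl x509 -noout -in <cert> -checkend 86400` runs above confirm that each control-plane certificate is still valid for at least 24 hours before the cluster is restarted. An equivalent check written against Go's standard library (illustrative only; the file path is just one of the certs probed in the log) parses the PEM block and compares NotAfter against now + 24h:

	package main

	import (
	    "crypto/x509"
	    "encoding/pem"
	    "fmt"
	    "os"
	    "time"
	)

	// validFor reports whether the first certificate in the PEM file at path
	// remains valid for at least d (the same idea as openssl's -checkend).
	func validFor(path string, d time.Duration) (bool, error) {
	    data, err := os.ReadFile(path)
	    if err != nil {
	        return false, err
	    }
	    block, _ := pem.Decode(data)
	    if block == nil {
	        return false, fmt.Errorf("no PEM block in %s", path)
	    }
	    cert, err := x509.ParseCertificate(block.Bytes)
	    if err != nil {
	        return false, err
	    }
	    return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
	    // One of the certificates checked in the log above.
	    ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	    fmt.Println(ok, err)
	}
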
	I1101 11:18:30.909720  118797 kubeadm.go:401] StartCluster: {Name:embed-certs-571864 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-571864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:18:30.909846  118797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:18:30.909940  118797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:18:30.966235  118797 cri.go:89] found id: ""
	I1101 11:18:30.966330  118797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:18:30.984720  118797 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:18:30.984748  118797 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:18:30.984851  118797 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:18:30.999216  118797 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:18:30.999964  118797 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-571864" does not appear in /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:31.000276  118797 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-70113/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-571864" cluster setting kubeconfig missing "embed-certs-571864" context setting]
	I1101 11:18:31.000880  118797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:31.074119  118797 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:18:31.088856  118797 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.61.132
	I1101 11:18:31.088902  118797 kubeadm.go:1161] stopping kube-system containers ...
	I1101 11:18:31.088917  118797 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 11:18:31.088992  118797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:18:31.141718  118797 cri.go:89] found id: ""
	I1101 11:18:31.141802  118797 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 11:18:31.168517  118797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:18:31.186956  118797 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:18:31.186983  118797 kubeadm.go:158] found existing configuration files:
	
	I1101 11:18:31.187043  118797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:18:31.204114  118797 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:18:31.204198  118797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:18:31.221331  118797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:18:31.240377  118797 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:18:31.240445  118797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:18:31.258829  118797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:18:31.277183  118797 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:18:31.277257  118797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:18:31.291526  118797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:18:31.304957  118797 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:18:31.305026  118797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:18:31.319125  118797 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
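(Editor's note, not part of the captured log: the block above is the stale-config cleanup - for each /etc/kubernetes/*.conf, grep for the control-plane endpoint and remove the file when the grep fails, before copying the freshly rendered kubeadm.yaml into place. A local sketch of that check-then-remove pattern with os/exec; the endpoint and file list are taken from the log, and this is not minikube's ssh_runner.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range confs {
		// grep -q exits 0 only when the endpoint appears in the file;
		// a non-zero exit (missing file or missing endpoint) means stale.
		if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Printf("%s is stale or missing, removing\n", f)
			_ = os.Remove(f) // equivalent of `sudo rm -f <file>` in the log
		}
	}
}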
	I1101 11:18:31.332409  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:31.413339  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:32.850750  118797 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.437367853s)
	I1101 11:18:32.850827  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:33.160969  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:33.248837  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:33.352582  118797 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:18:33.352690  118797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:33.853451  118797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:34.353702  118797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:34.853670  118797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:34.892010  118797 api_server.go:72] duration metric: took 1.539441132s to wait for apiserver process to appear ...
	I1101 11:18:34.892046  118797 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:18:34.892083  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
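(Editor's note, not part of the captured log: the api_server.go lines above and further below show the readiness loop - GET https://<apiserver>:8443/healthz repeatedly, tolerating 403 while RBAC bootstraps and 500 while poststarthooks finish, until it answers 200 "ok". A minimal sketch of that poll; TLS verification is skipped here purely for brevity, where the real client trusts the cluster CA.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for brevity; do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok" once the apiserver is healthy
			}
			// 403 (anonymous healthz blocked) or 500 (poststarthooks pending):
			// not ready yet, keep retrying.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.132:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}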
	I1101 11:18:34.296799  119092 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.189:22: connect: no route to host
	I1101 11:18:34.791192  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 11:18:34.791209  118233 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 11:18:34.791376  118233 main.go:143] libmachine: domain no-preload-294319 has defined MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.792320  118233 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ee:b7:c6", ip: ""} in network mk-no-preload-294319: {Iface:virbr1 ExpiryTime:2025-11-01 12:17:58 +0000 UTC Type:0 Mac:52:54:00:ee:b7:c6 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:no-preload-294319 Clientid:01:52:54:00:ee:b7:c6}
	I1101 11:18:34.792363  118233 main.go:143] libmachine: domain no-preload-294319 has defined IP address 192.168.39.49 and MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.793325  118233 main.go:143] libmachine: domain no-preload-294319 has defined MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.793582  118233 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/no-preload-294319/id_rsa Username:docker}
	I1101 11:18:34.794950  118233 main.go:143] libmachine: domain no-preload-294319 has defined MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.795199  118233 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ee:b7:c6", ip: ""} in network mk-no-preload-294319: {Iface:virbr1 ExpiryTime:2025-11-01 12:17:58 +0000 UTC Type:0 Mac:52:54:00:ee:b7:c6 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:no-preload-294319 Clientid:01:52:54:00:ee:b7:c6}
	I1101 11:18:34.795238  118233 main.go:143] libmachine: domain no-preload-294319 has defined IP address 192.168.39.49 and MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.795620  118233 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/no-preload-294319/id_rsa Username:docker}
	I1101 11:18:34.795647  118233 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ee:b7:c6", ip: ""} in network mk-no-preload-294319: {Iface:virbr1 ExpiryTime:2025-11-01 12:17:58 +0000 UTC Type:0 Mac:52:54:00:ee:b7:c6 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:no-preload-294319 Clientid:01:52:54:00:ee:b7:c6}
	I1101 11:18:34.795673  118233 main.go:143] libmachine: domain no-preload-294319 has defined IP address 192.168.39.49 and MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.796227  118233 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/no-preload-294319/id_rsa Username:docker}
	I1101 11:18:34.797218  118233 main.go:143] libmachine: domain no-preload-294319 has defined MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.797645  118233 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ee:b7:c6", ip: ""} in network mk-no-preload-294319: {Iface:virbr1 ExpiryTime:2025-11-01 12:17:58 +0000 UTC Type:0 Mac:52:54:00:ee:b7:c6 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:no-preload-294319 Clientid:01:52:54:00:ee:b7:c6}
	I1101 11:18:34.797675  118233 main.go:143] libmachine: domain no-preload-294319 has defined IP address 192.168.39.49 and MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.797859  118233 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/no-preload-294319/id_rsa Username:docker}
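(Editor's note, not part of the captured log: the sshutil.go lines above build SSH clients from the per-profile private key, which the next step uses to run `sudo systemctl start kubelet` on the guest. A sketch of that connect-and-run step with golang.org/x/crypto/ssh instead of minikube's sshutil/ssh_runner helpers; the IP, port, user, and key path come from the log, and ignoring the host key is an assumption made for brevity.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21830-70113/.minikube/machines/no-preload-294319/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "192.168.39.49:22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // assumption: no known_hosts check
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	// The command the log runs next on this host.
	out, err := session.CombinedOutput("sudo systemctl start kubelet")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}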
	I1101 11:18:35.220385  118233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:18:35.272999  118233 node_ready.go:35] waiting up to 6m0s for node "no-preload-294319" to be "Ready" ...
	I1101 11:18:35.278179  118233 node_ready.go:49] node "no-preload-294319" is "Ready"
	I1101 11:18:35.278213  118233 node_ready.go:38] duration metric: took 5.166878ms for node "no-preload-294319" to be "Ready" ...
	I1101 11:18:35.278233  118233 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:18:35.278309  118233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:35.332707  118233 api_server.go:72] duration metric: took 552.992291ms to wait for apiserver process to appear ...
	I1101 11:18:35.332737  118233 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:18:35.332759  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:35.344415  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 11:18:35.344448  118233 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 11:18:35.347678  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I1101 11:18:35.350113  118233 api_server.go:141] control plane version: v1.34.1
	I1101 11:18:35.350140  118233 api_server.go:131] duration metric: took 17.395507ms to wait for apiserver health ...
	I1101 11:18:35.350150  118233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:18:35.358801  118233 system_pods.go:59] 8 kube-system pods found
	I1101 11:18:35.358835  118233 system_pods.go:61] "coredns-66bc5c9577-x57vz" [eb2f3b71-41f2-4ae3-ac71-9ccc871abfc8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:18:35.358844  118233 system_pods.go:61] "etcd-no-preload-294319" [f4aadb8a-a6a7-4936-98fa-6e662ff2471d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:35.358866  118233 system_pods.go:61] "kube-apiserver-no-preload-294319" [fe68f1cd-151d-472c-955d-6c425117c91d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:18:35.358874  118233 system_pods.go:61] "kube-controller-manager-no-preload-294319" [efb452de-2f7a-4212-96c5-e5a8780b7694] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:18:35.358887  118233 system_pods.go:61] "kube-proxy-2qfgw" [f2d91d64-ec0c-45bf-bf3d-23b5dd8a78e4] Running
	I1101 11:18:35.358894  118233 system_pods.go:61] "kube-scheduler-no-preload-294319" [ec579a88-3103-48ff-b1cf-3463d6080e8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:35.358901  118233 system_pods.go:61] "metrics-server-746fcd58dc-dn4qd" [27a30dc7-b5c2-4eae-979d-72266debe708] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:35.358911  118233 system_pods.go:61] "storage-provisioner" [3af75b2c-851c-4786-8aab-77980cca46b5] Running
	I1101 11:18:35.358918  118233 system_pods.go:74] duration metric: took 8.761322ms to wait for pod list to return data ...
	I1101 11:18:35.358927  118233 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:18:35.363863  118233 default_sa.go:45] found service account: "default"
	I1101 11:18:35.363887  118233 default_sa.go:55] duration metric: took 4.950065ms for default service account to be created ...
	I1101 11:18:35.363897  118233 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:18:35.367716  118233 system_pods.go:86] 8 kube-system pods found
	I1101 11:18:35.367748  118233 system_pods.go:89] "coredns-66bc5c9577-x57vz" [eb2f3b71-41f2-4ae3-ac71-9ccc871abfc8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:18:35.367758  118233 system_pods.go:89] "etcd-no-preload-294319" [f4aadb8a-a6a7-4936-98fa-6e662ff2471d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:35.367769  118233 system_pods.go:89] "kube-apiserver-no-preload-294319" [fe68f1cd-151d-472c-955d-6c425117c91d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:18:35.367777  118233 system_pods.go:89] "kube-controller-manager-no-preload-294319" [efb452de-2f7a-4212-96c5-e5a8780b7694] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:18:35.367783  118233 system_pods.go:89] "kube-proxy-2qfgw" [f2d91d64-ec0c-45bf-bf3d-23b5dd8a78e4] Running
	I1101 11:18:35.367791  118233 system_pods.go:89] "kube-scheduler-no-preload-294319" [ec579a88-3103-48ff-b1cf-3463d6080e8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:35.367804  118233 system_pods.go:89] "metrics-server-746fcd58dc-dn4qd" [27a30dc7-b5c2-4eae-979d-72266debe708] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:35.367814  118233 system_pods.go:89] "storage-provisioner" [3af75b2c-851c-4786-8aab-77980cca46b5] Running
	I1101 11:18:35.367825  118233 system_pods.go:126] duration metric: took 3.92079ms to wait for k8s-apps to be running ...
	I1101 11:18:35.367839  118233 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:18:35.367895  118233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:18:35.404510  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 11:18:35.404562  118233 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 11:18:35.405800  118233 system_svc.go:56] duration metric: took 37.952183ms WaitForService to wait for kubelet
	I1101 11:18:35.405826  118233 kubeadm.go:587] duration metric: took 626.118166ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:18:35.405847  118233 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:18:35.412815  118233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:18:35.412842  118233 node_conditions.go:123] node cpu capacity is 2
	I1101 11:18:35.412879  118233 node_conditions.go:105] duration metric: took 7.02532ms to run NodePressure ...
	I1101 11:18:35.412895  118233 start.go:242] waiting for startup goroutines ...
	I1101 11:18:35.445510  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 11:18:35.445567  118233 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 11:18:35.481864  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 11:18:35.481896  118233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 11:18:35.521069  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 11:18:35.521101  118233 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 11:18:35.567949  118233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:18:35.584421  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 11:18:35.584452  118233 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 11:18:35.613219  118233 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 11:18:35.613245  118233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 11:18:35.614247  118233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:18:35.677563  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 11:18:35.677594  118233 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 11:18:35.688305  118233 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 11:18:35.688351  118233 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 11:18:35.768271  118233 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:18:35.768298  118233 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 11:18:35.783761  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 11:18:35.783805  118233 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 11:18:35.863966  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:18:35.863999  118233 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 11:18:35.879046  118233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:18:35.950088  118233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:18:38.451481  118233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.837193272s)
	I1101 11:18:38.452011  118233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.884022662s)
	I1101 11:18:38.527182  118233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.648090685s)
	I1101 11:18:38.527241  118233 addons.go:480] Verifying addon metrics-server=true in "no-preload-294319"
	I1101 11:18:38.570181  118233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.620045696s)
	I1101 11:18:38.571964  118233 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-294319 addons enable metrics-server
	
	I1101 11:18:38.574144  118233 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
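(Editor's note, not part of the captured log: the addon steps above follow one pattern - scp each manifest into /etc/kubernetes/addons, then issue a single `kubectl apply` with several -f flags using the kubelet node's own kubeconfig and bundled kubectl. A sketch of that apply invocation via os/exec, using the metrics-server file list and binary path shown in the log; it is not minikube's own addons helper.)

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	// Point kubectl at the in-guest kubeconfig, as the logged command does.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubectl apply failed: %v", err)
	}
}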
	I1101 11:18:37.674209  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:18:37.674269  118797 api_server.go:103] status: https://192.168.61.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:18:37.674291  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1101 11:18:37.780703  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:37.780738  118797 api_server.go:103] status: https://192.168.61.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:37.893073  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1101 11:18:37.919844  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:37.919953  118797 api_server.go:103] status: https://192.168.61.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:38.392229  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1101 11:18:38.441338  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:38.441372  118797 api_server.go:103] status: https://192.168.61.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:38.893047  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1101 11:18:38.902100  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 200:
	ok
	I1101 11:18:38.911904  118797 api_server.go:141] control plane version: v1.34.1
	I1101 11:18:38.911943  118797 api_server.go:131] duration metric: took 4.01988854s to wait for apiserver health ...
	I1101 11:18:38.911958  118797 cni.go:84] Creating CNI manager for ""
	I1101 11:18:38.911967  118797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:18:38.913955  118797 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:18:38.575619  118233 addons.go:515] duration metric: took 3.795593493s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1101 11:18:38.575662  118233 start.go:247] waiting for cluster config update ...
	I1101 11:18:38.575680  118233 start.go:256] writing updated cluster config ...
	I1101 11:18:38.575953  118233 ssh_runner.go:195] Run: rm -f paused
	I1101 11:18:38.582127  118233 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:18:38.585969  118233 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x57vz" in "kube-system" namespace to be "Ready" or be gone ...
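(Editor's note, not part of the captured log: the pod_ready.go lines above and below wait for a named kube-system pod to reach the PodReady condition or disappear. A hedged sketch of that check with client-go, assuming a recent client-go where Get takes a context; the kubeconfig path and pod name are taken from the log, and the 4m0s cap the real code applies is left as a comment.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for { // the real wait gives up after 4m0s; omitted here for brevity
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-66bc5c9577-x57vz", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod is gone")
			return
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					fmt.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
}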
	I1101 11:18:38.915317  118797 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:18:38.944434  118797 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
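(Editor's note, not part of the captured log: the two lines above are the bridge CNI setup - create /etc/cni/net.d and drop a conflist into it. The sketch below writes an illustrative bridge/host-local/portmap conflist; it is not the exact 496-byte file minikube copies, and the bridge name, cniVersion, and subnet are assumptions.)

package main

import "os"

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}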
	I1101 11:18:38.993920  118797 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:18:39.000090  118797 system_pods.go:59] 8 kube-system pods found
	I1101 11:18:39.000176  118797 system_pods.go:61] "coredns-66bc5c9577-w7cfg" [c0f904f6-44f6-4996-92dc-3fb6a537f96c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:18:39.000194  118797 system_pods.go:61] "etcd-embed-certs-571864" [770ba541-6fe5-4e10-84d7-ecf8f6d626f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:39.000208  118797 system_pods.go:61] "kube-apiserver-embed-certs-571864" [c9e8f5fd-436e-48aa-b2b2-f9a9564f2279] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:18:39.000226  118797 system_pods.go:61] "kube-controller-manager-embed-certs-571864" [2356aebd-c6e3-40e5-a125-b436db7c3a48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:18:39.000244  118797 system_pods.go:61] "kube-proxy-6ddph" [50935e47-809d-4324-8200-148a11692fa8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 11:18:39.000253  118797 system_pods.go:61] "kube-scheduler-embed-certs-571864" [11e5224c-7c54-489f-8396-283ed5892ff9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:39.000264  118797 system_pods.go:61] "metrics-server-746fcd58dc-8xq94" [319dd232-8ff5-4e8c-bb5a-c165604476c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:39.000272  118797 system_pods.go:61] "storage-provisioner" [c5bbb77a-fba5-4683-be08-22021d7600b8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:18:39.000285  118797 system_pods.go:74] duration metric: took 6.33173ms to wait for pod list to return data ...
	I1101 11:18:39.000297  118797 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:18:39.008010  118797 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:18:39.008052  118797 node_conditions.go:123] node cpu capacity is 2
	I1101 11:18:39.008069  118797 node_conditions.go:105] duration metric: took 7.765191ms to run NodePressure ...
	I1101 11:18:39.008147  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:39.471157  118797 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 11:18:39.476477  118797 kubeadm.go:744] kubelet initialised
	I1101 11:18:39.476509  118797 kubeadm.go:745] duration metric: took 5.319514ms waiting for restarted kubelet to initialise ...
	I1101 11:18:39.476551  118797 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:18:39.503024  118797 ops.go:34] apiserver oom_adj: -16
	I1101 11:18:39.503053  118797 kubeadm.go:602] duration metric: took 8.518294777s to restartPrimaryControlPlane
	I1101 11:18:39.503067  118797 kubeadm.go:403] duration metric: took 8.593364705s to StartCluster
	I1101 11:18:39.503107  118797 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:39.503214  118797 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:39.504891  118797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:39.505219  118797 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:18:39.505306  118797 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:18:39.505432  118797 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-571864"
	I1101 11:18:39.505439  118797 addons.go:70] Setting dashboard=true in profile "embed-certs-571864"
	I1101 11:18:39.505456  118797 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-571864"
	W1101 11:18:39.505467  118797 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:18:39.505468  118797 addons.go:239] Setting addon dashboard=true in "embed-certs-571864"
	W1101 11:18:39.505486  118797 addons.go:248] addon dashboard should already be in state true
	I1101 11:18:39.505499  118797 host.go:66] Checking if "embed-certs-571864" exists ...
	I1101 11:18:39.505509  118797 config.go:182] Loaded profile config "embed-certs-571864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:18:39.505541  118797 host.go:66] Checking if "embed-certs-571864" exists ...
	I1101 11:18:39.505588  118797 addons.go:70] Setting metrics-server=true in profile "embed-certs-571864"
	I1101 11:18:39.505612  118797 addons.go:239] Setting addon metrics-server=true in "embed-certs-571864"
	I1101 11:18:39.505612  118797 addons.go:70] Setting default-storageclass=true in profile "embed-certs-571864"
	W1101 11:18:39.505622  118797 addons.go:248] addon metrics-server should already be in state true
	I1101 11:18:39.505635  118797 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-571864"
	I1101 11:18:39.505643  118797 host.go:66] Checking if "embed-certs-571864" exists ...
	I1101 11:18:39.507268  118797 out.go:179] * Verifying Kubernetes components...
	I1101 11:18:39.508435  118797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:18:39.510190  118797 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 11:18:39.510215  118797 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 11:18:39.510221  118797 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:18:39.510883  118797 addons.go:239] Setting addon default-storageclass=true in "embed-certs-571864"
	W1101 11:18:39.510904  118797 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:18:39.510927  118797 host.go:66] Checking if "embed-certs-571864" exists ...
	I1101 11:18:39.511409  118797 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 11:18:39.511430  118797 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 11:18:39.511416  118797 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:18:39.511596  118797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:18:39.512695  118797 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 11:18:39.513453  118797 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:18:39.513472  118797 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:18:39.514193  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 11:18:39.514213  118797 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 11:18:39.516402  118797 main.go:143] libmachine: domain embed-certs-571864 has defined MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.516615  118797 main.go:143] libmachine: domain embed-certs-571864 has defined MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.517468  118797 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:0a:19", ip: ""} in network mk-embed-certs-571864: {Iface:virbr3 ExpiryTime:2025-11-01 12:18:17 +0000 UTC Type:0 Mac:52:54:00:d1:0a:19 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:embed-certs-571864 Clientid:01:52:54:00:d1:0a:19}
	I1101 11:18:39.517506  118797 main.go:143] libmachine: domain embed-certs-571864 has defined IP address 192.168.61.132 and MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.517582  118797 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:0a:19", ip: ""} in network mk-embed-certs-571864: {Iface:virbr3 ExpiryTime:2025-11-01 12:18:17 +0000 UTC Type:0 Mac:52:54:00:d1:0a:19 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:embed-certs-571864 Clientid:01:52:54:00:d1:0a:19}
	I1101 11:18:39.517617  118797 main.go:143] libmachine: domain embed-certs-571864 has defined IP address 192.168.61.132 and MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.517713  118797 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/embed-certs-571864/id_rsa Username:docker}
	I1101 11:18:39.518276  118797 main.go:143] libmachine: domain embed-certs-571864 has defined MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.518474  118797 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/embed-certs-571864/id_rsa Username:docker}
	I1101 11:18:39.518958  118797 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:0a:19", ip: ""} in network mk-embed-certs-571864: {Iface:virbr3 ExpiryTime:2025-11-01 12:18:17 +0000 UTC Type:0 Mac:52:54:00:d1:0a:19 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:embed-certs-571864 Clientid:01:52:54:00:d1:0a:19}
	I1101 11:18:39.518992  118797 main.go:143] libmachine: domain embed-certs-571864 has defined IP address 192.168.61.132 and MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.519188  118797 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/embed-certs-571864/id_rsa Username:docker}
	I1101 11:18:39.519411  118797 main.go:143] libmachine: domain embed-certs-571864 has defined MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.519970  118797 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:0a:19", ip: ""} in network mk-embed-certs-571864: {Iface:virbr3 ExpiryTime:2025-11-01 12:18:17 +0000 UTC Type:0 Mac:52:54:00:d1:0a:19 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:embed-certs-571864 Clientid:01:52:54:00:d1:0a:19}
	I1101 11:18:39.520000  118797 main.go:143] libmachine: domain embed-certs-571864 has defined IP address 192.168.61.132 and MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.520229  118797 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/embed-certs-571864/id_rsa Username:docker}
	I1101 11:18:39.837212  118797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:18:39.875438  118797 node_ready.go:35] waiting up to 6m0s for node "embed-certs-571864" to be "Ready" ...
	I1101 11:18:38.330029  119092 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.189:22: connect: connection refused
	I1101 11:18:40.157446  118797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:18:40.182830  118797 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 11:18:40.182869  118797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 11:18:40.207032  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 11:18:40.207065  118797 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 11:18:40.224171  118797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:18:40.245899  118797 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 11:18:40.245932  118797 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 11:18:40.287906  118797 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:18:40.287943  118797 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 11:18:40.301192  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 11:18:40.301222  118797 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 11:18:40.410546  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 11:18:40.410574  118797 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 11:18:40.426329  118797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:18:40.498210  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 11:18:40.498243  118797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 11:18:40.586946  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 11:18:40.586975  118797 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 11:18:40.684496  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 11:18:40.684524  118797 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 11:18:40.747690  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 11:18:40.747717  118797 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 11:18:40.794606  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 11:18:40.794635  118797 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 11:18:40.856557  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:18:40.856583  118797 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 11:18:40.922648  118797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1101 11:18:41.882343  118797 node_ready.go:57] node "embed-certs-571864" has "Ready":"False" status (will retry)
	I1101 11:18:42.027889  118797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.870394457s)
	I1101 11:18:42.027963  118797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.803748294s)
	I1101 11:18:42.035246  118797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.608877428s)
	I1101 11:18:42.035285  118797 addons.go:480] Verifying addon metrics-server=true in "embed-certs-571864"
	I1101 11:18:42.490212  118797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.567507869s)
	I1101 11:18:42.492205  118797 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-571864 addons enable metrics-server
	
	I1101 11:18:42.493903  118797 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1101 11:18:43.418175  119309 start.go:364] duration metric: took 16.758927327s to acquireMachinesLock for "newest-cni-268638"
	I1101 11:18:43.418233  119309 start.go:96] Skipping create...Using existing machine configuration
	I1101 11:18:43.418240  119309 fix.go:54] fixHost starting: 
	I1101 11:18:43.421209  119309 fix.go:112] recreateIfNeeded on newest-cni-268638: state=Stopped err=<nil>
	W1101 11:18:43.421247  119309 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 11:18:40.593855  118233 pod_ready.go:94] pod "coredns-66bc5c9577-x57vz" is "Ready"
	I1101 11:18:40.593893  118233 pod_ready.go:86] duration metric: took 2.007903056s for pod "coredns-66bc5c9577-x57vz" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:40.600654  118233 pod_ready.go:83] waiting for pod "etcd-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 11:18:42.609951  118233 pod_ready.go:104] pod "etcd-no-preload-294319" is not "Ready", error: <nil>
	I1101 11:18:43.612210  118233 pod_ready.go:94] pod "etcd-no-preload-294319" is "Ready"
	I1101 11:18:43.612245  118233 pod_ready.go:86] duration metric: took 3.011556469s for pod "etcd-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:43.616419  118233 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:42.495295  118797 addons.go:515] duration metric: took 2.990004035s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	W1101 11:18:44.380461  118797 node_ready.go:57] node "embed-certs-571864" has "Ready":"False" status (will retry)
	I1101 11:18:41.456668  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:18:41.461302  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.461846  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.461891  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.462316  119092 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/config.json ...
	I1101 11:18:41.462586  119092 machine.go:94] provisionDockerMachine start ...
	I1101 11:18:41.465685  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.466175  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.466210  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.466455  119092 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:41.466750  119092 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1101 11:18:41.466770  119092 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:18:41.592488  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 11:18:41.592519  119092 buildroot.go:166] provisioning hostname "default-k8s-diff-port-287419"
	I1101 11:18:41.596132  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.596670  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.596707  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.596947  119092 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:41.597275  119092 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1101 11:18:41.597300  119092 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-287419 && echo "default-k8s-diff-port-287419" | sudo tee /etc/hostname
	I1101 11:18:41.751054  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-287419
	
	I1101 11:18:41.755077  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.755663  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.755701  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.755942  119092 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:41.756227  119092 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1101 11:18:41.756264  119092 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-287419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-287419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-287419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:18:41.894839  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: 
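The SSH command above performs an idempotent /etc/hosts update: if a line ending in the hostname already exists it does nothing, otherwise it rewrites the 127.0.1.1 entry or appends one. As a rough illustration only, the Go sketch below performs the same edit on a local hosts-format file; the hostsPath constant and the space-separated matching are simplifications for the example, not minikube's actual implementation (minikube runs the shell snippet over SSH).

// Illustrative sketch: the same idempotent hosts-file edit as the shell
// snippet above, applied to a local file. hostsPath is a hypothetical path
// chosen for safe experimentation.
package main

import (
	"fmt"
	"os"
	"strings"
)

const hostsPath = "/tmp/hosts-example" // hypothetical; not a real minikube path

func ensureHostname(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		// Simplified presence check (space-separated only).
		if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
			return nil
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostname(hostsPath, "default-k8s-diff-port-287419"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}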
	I1101 11:18:41.894879  119092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 11:18:41.894961  119092 buildroot.go:174] setting up certificates
	I1101 11:18:41.894980  119092 provision.go:84] configureAuth start
	I1101 11:18:41.898652  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.899216  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.899255  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.902742  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.903260  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.903309  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.903463  119092 provision.go:143] copyHostCerts
	I1101 11:18:41.903526  119092 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem, removing ...
	I1101 11:18:41.903562  119092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem
	I1101 11:18:41.903662  119092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 11:18:41.903798  119092 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem, removing ...
	I1101 11:18:41.903816  119092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem
	I1101 11:18:41.903869  119092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 11:18:41.903964  119092 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem, removing ...
	I1101 11:18:41.903978  119092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem
	I1101 11:18:41.904020  119092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 11:18:41.904117  119092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-287419 san=[127.0.0.1 192.168.72.189 default-k8s-diff-port-287419 localhost minikube]
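The provision step above generates server.pem with the SANs listed in the log (127.0.0.1, 192.168.72.189, the profile name, localhost, minikube). A minimal crypto/x509 sketch of producing a certificate with that SAN set is shown below; it self-signs to stay short, whereas minikube signs the server certificate with the CA key from ca.pem/ca-key.pem.

// Minimal sketch: create a certificate carrying the same SANs the
// provision step logs. Self-signed here for brevity; the real server.pem
// is signed by the minikube CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-287419"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-287419", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.189")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}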
	I1101 11:18:42.668830  119092 provision.go:177] copyRemoteCerts
	I1101 11:18:42.668897  119092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:18:42.672740  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:42.673289  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:42.673322  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:42.673497  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:18:42.768628  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 11:18:42.804283  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 11:18:42.841345  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:18:42.877246  119092 provision.go:87] duration metric: took 982.248219ms to configureAuth
	I1101 11:18:42.877277  119092 buildroot.go:189] setting minikube options for container-runtime
	I1101 11:18:42.877486  119092 config.go:182] Loaded profile config "default-k8s-diff-port-287419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:18:42.881112  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:42.881569  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:42.881597  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:42.881942  119092 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:42.882150  119092 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1101 11:18:42.882164  119092 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:18:43.154660  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:18:43.154696  119092 machine.go:97] duration metric: took 1.692092034s to provisionDockerMachine
	I1101 11:18:43.154717  119092 start.go:293] postStartSetup for "default-k8s-diff-port-287419" (driver="kvm2")
	I1101 11:18:43.154737  119092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:18:43.154856  119092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:18:43.158201  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.158765  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:43.158814  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.159025  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:18:43.245201  119092 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:18:43.251371  119092 info.go:137] Remote host: Buildroot 2025.02
	I1101 11:18:43.251408  119092 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 11:18:43.251487  119092 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 11:18:43.251587  119092 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem -> 739982.pem in /etc/ssl/certs
	I1101 11:18:43.251681  119092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:18:43.264422  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:18:43.299403  119092 start.go:296] duration metric: took 144.66394ms for postStartSetup
	I1101 11:18:43.299451  119092 fix.go:56] duration metric: took 19.996599515s for fixHost
	I1101 11:18:43.302625  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.303139  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:43.303168  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.303320  119092 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:43.303555  119092 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1101 11:18:43.303566  119092 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 11:18:43.418003  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761995923.381565826
	
	I1101 11:18:43.418026  119092 fix.go:216] guest clock: 1761995923.381565826
	I1101 11:18:43.418038  119092 fix.go:229] Guest: 2025-11-01 11:18:43.381565826 +0000 UTC Remote: 2025-11-01 11:18:43.299455347 +0000 UTC m=+38.090698708 (delta=82.110479ms)
	I1101 11:18:43.418081  119092 fix.go:200] guest clock delta is within tolerance: 82.110479ms
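The clock check above runs `date +%s.%N` in the guest, parses the "seconds.nanoseconds" output, and compares it against the host time to get the logged delta. The sketch below shows that parse-and-compare with the value from the log; the 2s tolerance is an assumed figure for illustration, not necessarily minikube's threshold.

// Sketch of the guest-clock check: parse `date +%s.%N` output and compare
// it to a host timestamp. The 2s tolerance is an assumption for this example.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Normalize the fractional part to 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1761995923.381565826") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta < 2*time.Second)
}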
	I1101 11:18:43.418095  119092 start.go:83] releasing machines lock for "default-k8s-diff-port-287419", held for 20.115269056s
	I1101 11:18:43.422245  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.422873  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:43.422922  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.423656  119092 ssh_runner.go:195] Run: cat /version.json
	I1101 11:18:43.423745  119092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:18:43.427633  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.428098  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.428788  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:43.428841  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.428920  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:43.428967  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.429056  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:18:43.429264  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:18:43.534909  119092 ssh_runner.go:195] Run: systemctl --version
	I1101 11:18:43.542779  119092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:18:43.705592  119092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:18:43.717179  119092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:18:43.717261  119092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:18:43.749011  119092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 11:18:43.749051  119092 start.go:496] detecting cgroup driver to use...
	I1101 11:18:43.749137  119092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:18:43.772342  119092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:18:43.791797  119092 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:18:43.791870  119092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:18:43.811527  119092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:18:43.834526  119092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:18:44.023287  119092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:18:44.275548  119092 docker.go:234] disabling docker service ...
	I1101 11:18:44.275630  119092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:18:44.299729  119092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:18:44.322944  119092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:18:44.580019  119092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:18:44.751299  119092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:18:44.778037  119092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:18:44.813744  119092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:18:44.813818  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.832513  119092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:18:44.832603  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.848201  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.863420  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.884202  119092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:18:44.900851  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.918106  119092 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.949127  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.965351  119092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:18:44.978495  119092 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 11:18:44.978603  119092 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 11:18:45.005439  119092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:18:45.023377  119092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:18:45.237718  119092 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:18:45.492133  119092 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:18:45.492225  119092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:18:45.499840  119092 start.go:564] Will wait 60s for crictl version
	I1101 11:18:45.499923  119092 ssh_runner.go:195] Run: which crictl
	I1101 11:18:45.505873  119092 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 11:18:45.562824  119092 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 11:18:45.562918  119092 ssh_runner.go:195] Run: crio --version
	I1101 11:18:45.604231  119092 ssh_runner.go:195] Run: crio --version
	I1101 11:18:45.652733  119092 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 11:18:45.638411  118233 pod_ready.go:94] pod "kube-apiserver-no-preload-294319" is "Ready"
	I1101 11:18:45.638450  118233 pod_ready.go:86] duration metric: took 2.022002011s for pod "kube-apiserver-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.644242  118233 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.651083  118233 pod_ready.go:94] pod "kube-controller-manager-no-preload-294319" is "Ready"
	I1101 11:18:45.651119  118233 pod_ready.go:86] duration metric: took 6.837894ms for pod "kube-controller-manager-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.656820  118233 pod_ready.go:83] waiting for pod "kube-proxy-2qfgw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.676011  118233 pod_ready.go:94] pod "kube-proxy-2qfgw" is "Ready"
	I1101 11:18:45.676047  118233 pod_ready.go:86] duration metric: took 19.197486ms for pod "kube-proxy-2qfgw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.680557  118233 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:46.006840  118233 pod_ready.go:94] pod "kube-scheduler-no-preload-294319" is "Ready"
	I1101 11:18:46.006874  118233 pod_ready.go:86] duration metric: took 326.284636ms for pod "kube-scheduler-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:46.006890  118233 pod_ready.go:40] duration metric: took 7.424729376s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:18:46.076140  118233 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 11:18:46.077877  118233 out.go:179] * Done! kubectl is now configured to use "no-preload-294319" cluster and "default" namespace by default
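The pod_ready lines above follow a common pattern: poll a readiness check on an interval until it passes or a deadline expires, logging the elapsed duration. The generic stdlib sketch below captures that loop; the 2s interval and the stand-in check function are illustrative, since minikube's real check queries the apiserver via client-go.

// Generic sketch of the polling pattern behind the pod_ready waits above.
// Interval and the fake check are assumptions for illustration.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func waitFor(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ready, err := check()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for condition")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	start := time.Now()
	err := waitFor(ctx, 2*time.Second, func() (bool, error) {
		// Stand-in for "is the pod Ready yet?".
		return time.Since(start) > 3*time.Second, nil
	})
	fmt.Println("wait finished:", err, "after", time.Since(start).Round(time.Millisecond))
}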
	I1101 11:18:43.422982  119309 out.go:252] * Restarting existing kvm2 VM for "newest-cni-268638" ...
	I1101 11:18:43.423037  119309 main.go:143] libmachine: starting domain...
	I1101 11:18:43.423053  119309 main.go:143] libmachine: ensuring networks are active...
	I1101 11:18:43.424417  119309 main.go:143] libmachine: Ensuring network default is active
	I1101 11:18:43.425173  119309 main.go:143] libmachine: Ensuring network mk-newest-cni-268638 is active
	I1101 11:18:43.426407  119309 main.go:143] libmachine: getting domain XML...
	I1101 11:18:43.428447  119309 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-268638</name>
	  <uuid>40498d54-a520-4b96-9f84-14615a0fb7fb</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/newest-cni-268638.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:62:b8:3b'/>
	      <source network='mk-newest-cni-268638'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:0b:98:32'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
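The later "found host DHCP lease matching ... mac: 52:54:00:62:b8:3b" lines key off fields of the domain definition above, chiefly the domain name and the MAC attached to the mk-newest-cni-268638 network. As a small illustration, the encoding/xml sketch below extracts those fields from a trimmed copy of the XML; the trimmed literal is only an excerpt of the dump above.

// Sketch: parse a libvirt domain definition like the one above and pull out
// the domain name plus per-network MAC addresses. The XML literal is a
// trimmed excerpt of the dump in the log.
package main

import (
	"encoding/xml"
	"fmt"
)

type domain struct {
	Name       string `xml:"name"`
	Interfaces []struct {
		MAC struct {
			Address string `xml:"address,attr"`
		} `xml:"mac"`
		Source struct {
			Network string `xml:"network,attr"`
		} `xml:"source"`
	} `xml:"devices>interface"`
}

const domainXML = `<domain type='kvm'>
  <name>newest-cni-268638</name>
  <devices>
    <interface type='network'>
      <mac address='52:54:00:62:b8:3b'/>
      <source network='mk-newest-cni-268638'/>
    </interface>
    <interface type='network'>
      <mac address='52:54:00:0b:98:32'/>
      <source network='default'/>
    </interface>
  </devices>
</domain>`

func main() {
	var d domain
	if err := xml.Unmarshal([]byte(domainXML), &d); err != nil {
		panic(err)
	}
	fmt.Println("domain:", d.Name)
	for _, iface := range d.Interfaces {
		fmt.Printf("network %-22s mac %s\n", iface.Source.Network, iface.MAC.Address)
	}
}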
	
	I1101 11:18:45.136752  119309 main.go:143] libmachine: waiting for domain to start...
	I1101 11:18:45.138581  119309 main.go:143] libmachine: domain is now running
	I1101 11:18:45.138602  119309 main.go:143] libmachine: waiting for IP...
	I1101 11:18:45.139581  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:18:45.140387  119309 main.go:143] libmachine: domain newest-cni-268638 has current primary IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:18:45.140405  119309 main.go:143] libmachine: found domain IP: 192.168.83.241
	I1101 11:18:45.140413  119309 main.go:143] libmachine: reserving static IP address...
	I1101 11:18:45.140922  119309 main.go:143] libmachine: found host DHCP lease matching {name: "newest-cni-268638", mac: "52:54:00:62:b8:3b", ip: "192.168.83.241"} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:17:40 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:18:45.140956  119309 main.go:143] libmachine: skip adding static IP to network mk-newest-cni-268638 - found existing host DHCP lease matching {name: "newest-cni-268638", mac: "52:54:00:62:b8:3b", ip: "192.168.83.241"}
	I1101 11:18:45.140967  119309 main.go:143] libmachine: reserved static IP address 192.168.83.241 for domain newest-cni-268638
	I1101 11:18:45.140974  119309 main.go:143] libmachine: waiting for SSH...
	I1101 11:18:45.140981  119309 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 11:18:45.144764  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:18:45.145315  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:17:40 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:18:45.145357  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:18:45.145605  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:45.145929  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:18:45.145949  119309 main.go:143] libmachine: About to run SSH command:
	exit 0
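The WaitForSSH step above (and the later "Error dialing TCP ... no route to host" retry at 11:18:48) amounts to repeatedly attempting a TCP connection to the guest's port 22 until the freshly restarted VM becomes reachable. A minimal stdlib sketch of that retry loop follows; the per-dial timeout, retry interval, and overall deadline are illustrative values rather than minikube's exact settings.

// Minimal sketch of "waiting for SSH": dial port 22 until it succeeds or an
// overall deadline passes. Timeouts here are assumptions for the example.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForTCP(addr string, overall time.Duration) error {
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		// e.g. "connect: no route to host" while the VM is still booting
		fmt.Println("retrying:", err)
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("port %s not reachable within %v", addr, overall)
}

func main() {
	if err := waitForTCP("192.168.83.241:22", 30*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("SSH port reachable")
	}
}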
	W1101 11:18:46.385220  118797 node_ready.go:57] node "embed-certs-571864" has "Ready":"False" status (will retry)
	I1101 11:18:48.384345  118797 node_ready.go:49] node "embed-certs-571864" is "Ready"
	I1101 11:18:48.384409  118797 node_ready.go:38] duration metric: took 8.508911909s for node "embed-certs-571864" to be "Ready" ...
	I1101 11:18:48.384432  118797 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:18:48.384515  118797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:48.425220  118797 api_server.go:72] duration metric: took 8.919952173s to wait for apiserver process to appear ...
	I1101 11:18:48.425259  118797 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:18:48.425286  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1101 11:18:48.434060  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 200:
	ok
	I1101 11:18:48.435647  118797 api_server.go:141] control plane version: v1.34.1
	I1101 11:18:48.435681  118797 api_server.go:131] duration metric: took 10.412081ms to wait for apiserver health ...
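The healthz probe above is an HTTPS GET against https://192.168.61.132:8443/healthz that expects a 200 response with body "ok". The sketch below is a simplified version of that probe: it skips certificate verification purely to stay self-contained, whereas the real check trusts the cluster CA and presents client certificates.

// Simplified sketch of the apiserver healthz probe logged above.
// InsecureSkipVerify is used only to keep the sketch self-contained.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func healthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := healthz("https://192.168.61.132:8443/healthz")
	fmt.Println("healthy:", ok, "err:", err)
}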
	I1101 11:18:48.435693  118797 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:18:48.440556  118797 system_pods.go:59] 8 kube-system pods found
	I1101 11:18:48.440590  118797 system_pods.go:61] "coredns-66bc5c9577-w7cfg" [c0f904f6-44f6-4996-92dc-3fb6a537f96c] Running
	I1101 11:18:48.440609  118797 system_pods.go:61] "etcd-embed-certs-571864" [770ba541-6fe5-4e10-84d7-ecf8f6d626f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:48.440615  118797 system_pods.go:61] "kube-apiserver-embed-certs-571864" [c9e8f5fd-436e-48aa-b2b2-f9a9564f2279] Running
	I1101 11:18:48.440622  118797 system_pods.go:61] "kube-controller-manager-embed-certs-571864" [2356aebd-c6e3-40e5-a125-b436db7c3a48] Running
	I1101 11:18:48.440627  118797 system_pods.go:61] "kube-proxy-6ddph" [50935e47-809d-4324-8200-148a11692fa8] Running
	I1101 11:18:48.440634  118797 system_pods.go:61] "kube-scheduler-embed-certs-571864" [11e5224c-7c54-489f-8396-283ed5892ff9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:48.440642  118797 system_pods.go:61] "metrics-server-746fcd58dc-8xq94" [319dd232-8ff5-4e8c-bb5a-c165604476c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:48.440657  118797 system_pods.go:61] "storage-provisioner" [c5bbb77a-fba5-4683-be08-22021d7600b8] Running
	I1101 11:18:48.440673  118797 system_pods.go:74] duration metric: took 4.964747ms to wait for pod list to return data ...
	I1101 11:18:48.440683  118797 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:18:48.445681  118797 default_sa.go:45] found service account: "default"
	I1101 11:18:48.445764  118797 default_sa.go:55] duration metric: took 5.073234ms for default service account to be created ...
	I1101 11:18:48.445778  118797 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:18:48.451443  118797 system_pods.go:86] 8 kube-system pods found
	I1101 11:18:48.451742  118797 system_pods.go:89] "coredns-66bc5c9577-w7cfg" [c0f904f6-44f6-4996-92dc-3fb6a537f96c] Running
	I1101 11:18:48.451766  118797 system_pods.go:89] "etcd-embed-certs-571864" [770ba541-6fe5-4e10-84d7-ecf8f6d626f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:48.451774  118797 system_pods.go:89] "kube-apiserver-embed-certs-571864" [c9e8f5fd-436e-48aa-b2b2-f9a9564f2279] Running
	I1101 11:18:48.451781  118797 system_pods.go:89] "kube-controller-manager-embed-certs-571864" [2356aebd-c6e3-40e5-a125-b436db7c3a48] Running
	I1101 11:18:48.451787  118797 system_pods.go:89] "kube-proxy-6ddph" [50935e47-809d-4324-8200-148a11692fa8] Running
	I1101 11:18:48.451798  118797 system_pods.go:89] "kube-scheduler-embed-certs-571864" [11e5224c-7c54-489f-8396-283ed5892ff9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:48.451806  118797 system_pods.go:89] "metrics-server-746fcd58dc-8xq94" [319dd232-8ff5-4e8c-bb5a-c165604476c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:48.451811  118797 system_pods.go:89] "storage-provisioner" [c5bbb77a-fba5-4683-be08-22021d7600b8] Running
	I1101 11:18:48.451823  118797 system_pods.go:126] duration metric: took 6.036564ms to wait for k8s-apps to be running ...
	I1101 11:18:48.451832  118797 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:18:48.451887  118797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:18:48.531966  118797 system_svc.go:56] duration metric: took 80.113291ms WaitForService to wait for kubelet
	I1101 11:18:48.532000  118797 kubeadm.go:587] duration metric: took 9.02673999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:18:48.532023  118797 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:18:48.540985  118797 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:18:48.541046  118797 node_conditions.go:123] node cpu capacity is 2
	I1101 11:18:48.541060  118797 node_conditions.go:105] duration metric: took 9.030982ms to run NodePressure ...
	I1101 11:18:48.541076  118797 start.go:242] waiting for startup goroutines ...
	I1101 11:18:48.541086  118797 start.go:247] waiting for cluster config update ...
	I1101 11:18:48.541166  118797 start.go:256] writing updated cluster config ...
	I1101 11:18:48.541638  118797 ssh_runner.go:195] Run: rm -f paused
	I1101 11:18:48.560819  118797 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:18:48.573978  118797 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7cfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:48.590064  118797 pod_ready.go:94] pod "coredns-66bc5c9577-w7cfg" is "Ready"
	I1101 11:18:48.590101  118797 pod_ready.go:86] duration metric: took 16.092377ms for pod "coredns-66bc5c9577-w7cfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:48.597991  118797 pod_ready.go:83] waiting for pod "etcd-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.611561  118797 pod_ready.go:94] pod "etcd-embed-certs-571864" is "Ready"
	I1101 11:18:49.611605  118797 pod_ready.go:86] duration metric: took 1.013583664s for pod "etcd-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.619039  118797 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.642201  118797 pod_ready.go:94] pod "kube-apiserver-embed-certs-571864" is "Ready"
	I1101 11:18:49.642241  118797 pod_ready.go:86] duration metric: took 23.165447ms for pod "kube-apiserver-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.646543  118797 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.767271  118797 pod_ready.go:94] pod "kube-controller-manager-embed-certs-571864" is "Ready"
	I1101 11:18:49.767304  118797 pod_ready.go:86] duration metric: took 120.732816ms for pod "kube-controller-manager-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.968010  118797 pod_ready.go:83] waiting for pod "kube-proxy-6ddph" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.658762  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:45.659437  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:45.659479  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:45.659778  119092 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1101 11:18:45.666180  119092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:18:45.685523  119092 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-287419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-287419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.189 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:18:45.685726  119092 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:18:45.685806  119092 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:18:45.739874  119092 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 11:18:45.739973  119092 ssh_runner.go:195] Run: which lz4
	I1101 11:18:45.745645  119092 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 11:18:45.753480  119092 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 11:18:45.753514  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 11:18:47.890147  119092 crio.go:462] duration metric: took 2.144617755s to copy over tarball
	I1101 11:18:47.890300  119092 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 11:18:50.155390  119092 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.265050771s)
	I1101 11:18:50.155433  119092 crio.go:469] duration metric: took 2.265246579s to extract the tarball
	I1101 11:18:50.155443  119092 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 11:18:50.204230  119092 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:18:50.258185  119092 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:18:50.258221  119092 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:18:50.258234  119092 kubeadm.go:935] updating node { 192.168.72.189 8444 v1.34.1 crio true true} ...
	I1101 11:18:50.258391  119092 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-287419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-287419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
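The kubelet drop-in above is rendered from a handful of per-node values (Kubernetes version, hostname override, node IP). As a rough illustration, the text/template sketch below produces the same ExecStart line from those values; the template text is reconstructed from the log output rather than copied from minikube's real template file.

// Sketch: render a kubelet ExecStart line like the drop-in above from the
// node's values. Template text reconstructed from the log, not minikube's
// actual template.
package main

import (
	"os"
	"text/template"
)

const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"Hostname":          "default-k8s-diff-port-287419",
		"NodeIP":            "192.168.72.189",
	})
}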
	I1101 11:18:50.258493  119092 ssh_runner.go:195] Run: crio config
	I1101 11:18:48.248827  119309 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.83.241:22: connect: no route to host
	I1101 11:18:50.367021  118797 pod_ready.go:94] pod "kube-proxy-6ddph" is "Ready"
	I1101 11:18:50.367054  118797 pod_ready.go:86] duration metric: took 399.012072ms for pod "kube-proxy-6ddph" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:50.567216  118797 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:52.221415  118797 pod_ready.go:94] pod "kube-scheduler-embed-certs-571864" is "Ready"
	I1101 11:18:52.221448  118797 pod_ready.go:86] duration metric: took 1.654197202s for pod "kube-scheduler-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:52.221463  118797 pod_ready.go:40] duration metric: took 3.660600674s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:18:52.276290  118797 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 11:18:52.296690  118797 out.go:179] * Done! kubectl is now configured to use "embed-certs-571864" cluster and "default" namespace by default
	I1101 11:18:50.323593  119092 cni.go:84] Creating CNI manager for ""
	I1101 11:18:50.323631  119092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:18:50.323663  119092 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:18:50.323698  119092 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.189 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-287419 NodeName:default-k8s-diff-port-287419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:18:50.323866  119092 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.189
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-287419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.189"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.189"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
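The generated config above pins the pod subnet (10.244.0.0/16) and service subnet (10.96.0.0/12). The basic sanity constraint on those two values is that both parse as CIDRs and the ranges do not overlap; the short net/netip sketch below checks exactly that for the values shown.

// Quick sanity-check sketch for the CIDRs in the config above: both must
// parse and the pod and service ranges must not overlap.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pod := netip.MustParsePrefix("10.244.0.0/16") // podSubnet above
	svc := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet above
	fmt.Println("pod subnet:", pod, "service subnet:", svc)
	fmt.Println("overlap:", pod.Overlaps(svc)) // expected: false
}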
	
	I1101 11:18:50.323933  119092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:18:50.338036  119092 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:18:50.338130  119092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:18:50.351395  119092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1101 11:18:50.378165  119092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:18:50.404831  119092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1101 11:18:50.432730  119092 ssh_runner.go:195] Run: grep 192.168.72.189	control-plane.minikube.internal$ /etc/hosts
	I1101 11:18:50.438076  119092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.189	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:18:50.458245  119092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:18:50.653121  119092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:18:50.682404  119092 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419 for IP: 192.168.72.189
	I1101 11:18:50.682436  119092 certs.go:195] generating shared ca certs ...
	I1101 11:18:50.682464  119092 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:50.682663  119092 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 11:18:50.682720  119092 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 11:18:50.682733  119092 certs.go:257] generating profile certs ...
	I1101 11:18:50.682880  119092 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/client.key
	I1101 11:18:50.682981  119092 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/apiserver.key.f27f6a30
	I1101 11:18:50.683040  119092 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/proxy-client.key
	I1101 11:18:50.683213  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem (1338 bytes)
	W1101 11:18:50.683253  119092 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998_empty.pem, impossibly tiny 0 bytes
	I1101 11:18:50.683263  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 11:18:50.683293  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:18:50.683319  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:18:50.683346  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 11:18:50.683397  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:18:50.684304  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:18:50.770464  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:18:50.826170  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:18:50.865353  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:18:50.903837  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 11:18:50.939547  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:18:50.979273  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:18:51.014443  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:18:51.052333  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /usr/share/ca-certificates/739982.pem (1708 bytes)
	I1101 11:18:51.098653  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:18:51.142604  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem --> /usr/share/ca-certificates/73998.pem (1338 bytes)
	I1101 11:18:51.180582  119092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:18:51.208143  119092 ssh_runner.go:195] Run: openssl version
	I1101 11:18:51.215212  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73998.pem && ln -fs /usr/share/ca-certificates/73998.pem /etc/ssl/certs/73998.pem"
	I1101 11:18:51.231219  119092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73998.pem
	I1101 11:18:51.237181  119092 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:03 /usr/share/ca-certificates/73998.pem
	I1101 11:18:51.237262  119092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73998.pem
	I1101 11:18:51.245877  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73998.pem /etc/ssl/certs/51391683.0"
	I1101 11:18:51.261634  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739982.pem && ln -fs /usr/share/ca-certificates/739982.pem /etc/ssl/certs/739982.pem"
	I1101 11:18:51.276783  119092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739982.pem
	I1101 11:18:51.284238  119092 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:03 /usr/share/ca-certificates/739982.pem
	I1101 11:18:51.284312  119092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739982.pem
	I1101 11:18:51.295293  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/739982.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:18:51.311150  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:18:51.328773  119092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:51.336579  119092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:51.336652  119092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:51.344957  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
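Each openssl x509 -hash call above computes the OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0) under which the corresponding CA bundle gets symlinked into /etc/ssl/certs. The per-certificate idiom, stand-alone (hypothetical file name):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"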
	I1101 11:18:51.359860  119092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:18:51.366325  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:18:51.375090  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:18:51.384626  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:18:51.392868  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:18:51.402460  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:18:51.411353  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
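openssl's -checkend 86400 exits non-zero if the certificate expires within 86400 seconds (24 hours); this is evidently how minikube decides whether the existing control-plane certificates can be reused. Stand-alone:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expires within 24h'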
	I1101 11:18:51.420451  119092 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-287419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-287419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.189 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:18:51.420580  119092 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:18:51.420645  119092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:18:51.498981  119092 cri.go:89] found id: ""
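crictl's --label flag filters on the CRI labels that kubelet attaches to pod sandboxes and containers; an empty id list here simply means no kube-system containers exist yet on the restarted node. The same query by hand:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system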
	I1101 11:18:51.499063  119092 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:18:51.521648  119092 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:18:51.521678  119092 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:18:51.521743  119092 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:18:51.545925  119092 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:18:51.547118  119092 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-287419" does not appear in /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:51.547926  119092 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-70113/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-287419" cluster setting kubeconfig missing "default-k8s-diff-port-287419" context setting]
	I1101 11:18:51.549046  119092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:51.551181  119092 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:18:51.569605  119092 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.72.189
	I1101 11:18:51.569656  119092 kubeadm.go:1161] stopping kube-system containers ...
	I1101 11:18:51.569674  119092 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 11:18:51.569742  119092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:18:51.615711  119092 cri.go:89] found id: ""
	I1101 11:18:51.615786  119092 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 11:18:51.637779  119092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:18:51.651841  119092 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:18:51.651866  119092 kubeadm.go:158] found existing configuration files:
	
	I1101 11:18:51.651925  119092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1101 11:18:51.664205  119092 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:18:51.664267  119092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:18:51.677582  119092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1101 11:18:51.692735  119092 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:18:51.692837  119092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:18:51.707180  119092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1101 11:18:51.719498  119092 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:18:51.719581  119092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:18:51.732153  119092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1101 11:18:51.744913  119092 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:18:51.744989  119092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
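The four grep/rm pairs above apply one rule per file: keep a kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8444, otherwise remove it so kubeadm can regenerate it. A single-file sketch of the idiom:

    F=/etc/kubernetes/admin.conf
    sudo grep -q 'https://control-plane.minikube.internal:8444' "$F" || sudo rm -f "$F"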
	I1101 11:18:51.759367  119092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:18:51.774247  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:51.853357  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:53.362889  119092 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.509481634s)
	I1101 11:18:53.362994  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:53.731950  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:53.866848  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
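Rather than a full kubeadm init, the restart path replays individual init phases against the same /var/tmp/minikube/kubeadm.yaml. In plain commands (as run above, minus the PATH override to the cached v1.34.1 binaries):

    kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml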
	I1101 11:18:54.001010  119092 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:18:54.001129  119092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:54.501589  119092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:55.001870  119092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:54.329849  119309 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.83.241:22: connect: no route to host
	I1101 11:18:55.501249  119092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:55.566894  119092 api_server.go:72] duration metric: took 1.565913808s to wait for apiserver process to appear ...
	I1101 11:18:55.566931  119092 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:18:55.566973  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:18:55.568053  119092 api_server.go:269] stopped: https://192.168.72.189:8444/healthz: Get "https://192.168.72.189:8444/healthz": dial tcp 192.168.72.189:8444: connect: connection refused
	I1101 11:18:56.067770  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:18:59.493243  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:18:59.493281  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:18:59.493300  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:18:59.546941  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:18:59.546974  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:18:59.567134  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:18:59.582757  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:59.582800  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:00.067142  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:19:00.077874  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:00.077907  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:59.386401  119309 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.83.241:22: connect: connection refused
	I1101 11:19:00.570635  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:19:00.599718  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:00.599754  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:01.067678  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:19:01.083030  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:01.083068  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:01.567897  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:19:01.580905  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 200:
	ok
	I1101 11:19:01.600341  119092 api_server.go:141] control plane version: v1.34.1
	I1101 11:19:01.600375  119092 api_server.go:131] duration metric: took 6.033436041s to wait for apiserver health ...
	I1101 11:19:01.600388  119092 cni.go:84] Creating CNI manager for ""
	I1101 11:19:01.600396  119092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:19:01.602421  119092 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:19:01.603598  119092 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:19:01.627377  119092 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
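The bridge CNI conflist is written from memory, so its contents never appear in the log; to see what actually landed on the node one could read it back (assuming the same profile name):

    minikube -p default-k8s-diff-port-287419 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist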
	I1101 11:19:01.669719  119092 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:19:01.694977  119092 system_pods.go:59] 8 kube-system pods found
	I1101 11:19:01.695033  119092 system_pods.go:61] "coredns-66bc5c9577-drlhc" [2fe001ab-c59d-4a12-9897-d7d2869a1af8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:19:01.695054  119092 system_pods.go:61] "etcd-default-k8s-diff-port-287419" [67bd5955-ba6e-4d48-a952-857e719ddcb6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:19:01.695067  119092 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-287419" [c8154e49-5eed-4825-b594-e588075878ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:19:01.695078  119092 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-287419" [02a42753-0962-4d25-b898-43759f929c36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:19:01.695091  119092 system_pods.go:61] "kube-proxy-lhjdx" [63b7c2eb-cdb2-4318-bef4-e95e3e478fb6] Running
	I1101 11:19:01.695100  119092 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-287419" [49ee2304-24ca-4a26-8b1c-9f59d8281dea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:19:01.695112  119092 system_pods.go:61] "metrics-server-746fcd58dc-zmbnr" [ffa3dd51-bf02-44da-800d-f8d714bc1b36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:19:01.695120  119092 system_pods.go:61] "storage-provisioner" [4a047ac3-d0c4-448e-8066-5a3ccd78fcc1] Running
	I1101 11:19:01.695129  119092 system_pods.go:74] duration metric: took 25.383583ms to wait for pod list to return data ...
	I1101 11:19:01.695141  119092 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:19:01.700190  119092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:19:01.700224  119092 node_conditions.go:123] node cpu capacity is 2
	I1101 11:19:01.700238  119092 node_conditions.go:105] duration metric: took 5.091601ms to run NodePressure ...
	I1101 11:19:01.700308  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:02.239500  119092 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 11:19:02.244469  119092 kubeadm.go:744] kubelet initialised
	I1101 11:19:02.244497  119092 kubeadm.go:745] duration metric: took 4.968663ms waiting for restarted kubelet to initialise ...
	I1101 11:19:02.244518  119092 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:19:02.279266  119092 ops.go:34] apiserver oom_adj: -16
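The oom_adj probe confirms the apiserver's OOM protection is in place (a negative value such as -16 makes the kernel far less likely to OOM-kill the process). Stand-alone, as run above:

    cat /proc/$(pgrep kube-apiserver)/oom_adj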
	I1101 11:19:02.279294  119092 kubeadm.go:602] duration metric: took 10.757607601s to restartPrimaryControlPlane
	I1101 11:19:02.279306  119092 kubeadm.go:403] duration metric: took 10.85886702s to StartCluster
	I1101 11:19:02.279324  119092 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:02.279410  119092 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:19:02.281069  119092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:02.281465  119092 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.72.189 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:19:02.281566  119092 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:19:02.281667  119092 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-287419"
	I1101 11:19:02.281687  119092 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-287419"
	W1101 11:19:02.281695  119092 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:19:02.281724  119092 host.go:66] Checking if "default-k8s-diff-port-287419" exists ...
	I1101 11:19:02.281766  119092 config.go:182] Loaded profile config "default-k8s-diff-port-287419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:19:02.281828  119092 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-287419"
	I1101 11:19:02.281853  119092 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-287419"
	I1101 11:19:02.282446  119092 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-287419"
	I1101 11:19:02.282468  119092 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-287419"
	W1101 11:19:02.282476  119092 addons.go:248] addon metrics-server should already be in state true
	I1101 11:19:02.282485  119092 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-287419"
	I1101 11:19:02.282503  119092 host.go:66] Checking if "default-k8s-diff-port-287419" exists ...
	I1101 11:19:02.282508  119092 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-287419"
	W1101 11:19:02.282518  119092 addons.go:248] addon dashboard should already be in state true
	I1101 11:19:02.282572  119092 host.go:66] Checking if "default-k8s-diff-port-287419" exists ...
	I1101 11:19:02.284444  119092 out.go:179] * Verifying Kubernetes components...
	I1101 11:19:02.285933  119092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:19:02.287770  119092 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-287419"
	W1101 11:19:02.287795  119092 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:19:02.287823  119092 host.go:66] Checking if "default-k8s-diff-port-287419" exists ...
	I1101 11:19:02.288749  119092 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 11:19:02.288756  119092 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 11:19:02.289642  119092 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:19:02.290051  119092 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:19:02.290075  119092 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:19:02.290704  119092 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 11:19:02.290726  119092 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 11:19:02.290910  119092 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:19:02.290920  119092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:19:02.292097  119092 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 11:19:02.293195  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 11:19:02.293214  119092 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 11:19:02.296074  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.296424  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.297057  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:19:02.297090  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.297189  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.297637  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:19:02.298076  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:19:02.298114  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.298222  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:19:02.298254  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.298359  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:19:02.298820  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:19:02.299984  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.300478  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:19:02.300519  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.300712  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:19:02.661574  119092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:19:02.693236  119092 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-287419" to be "Ready" ...
	I1101 11:19:02.945197  119092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:19:02.956238  119092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:19:02.974795  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 11:19:02.974832  119092 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 11:19:02.990318  119092 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 11:19:02.990363  119092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 11:19:03.107005  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 11:19:03.107035  119092 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 11:19:03.108601  119092 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 11:19:03.108634  119092 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 11:19:03.248974  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 11:19:03.249206  119092 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 11:19:03.250386  119092 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:19:03.250403  119092 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 11:19:03.380890  119092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:19:03.380891  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 11:19:03.381051  119092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 11:19:03.493547  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 11:19:03.493574  119092 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 11:19:03.614503  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 11:19:03.614527  119092 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 11:19:03.712290  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 11:19:03.712319  119092 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 11:19:03.761853  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 11:19:03.761878  119092 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 11:19:03.844380  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:19:03.844416  119092 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 11:19:03.915138  119092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1101 11:19:04.698774  119092 node_ready.go:57] node "default-k8s-diff-port-287419" has "Ready":"False" status (will retry)
	I1101 11:19:05.727577  119092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.771302612s)
	I1101 11:19:05.727646  119092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.346638304s)
	I1101 11:19:05.727664  119092 addons.go:480] Verifying addon metrics-server=true in "default-k8s-diff-port-287419"
	I1101 11:19:05.890894  119092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.975640964s)
	I1101 11:19:05.892383  119092 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-287419 addons enable metrics-server
	
	I1101 11:19:05.893977  119092 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
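From here the per-profile addon state can be cross-checked at any time with:

    minikube -p default-k8s-diff-port-287419 addons list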
	I1101 11:19:02.543216  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:19:02.550967  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.552473  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:02.552512  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.553123  119309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/config.json ...
	I1101 11:19:02.553465  119309 machine.go:94] provisionDockerMachine start ...
	I1101 11:19:02.559146  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.560029  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:02.560229  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.560723  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:19:02.561035  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:19:02.561095  119309 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:19:02.694782  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 11:19:02.694814  119309 buildroot.go:166] provisioning hostname "newest-cni-268638"
	I1101 11:19:02.700708  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.701376  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:02.701434  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.701767  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:19:02.702072  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:19:02.702097  119309 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-268638 && echo "newest-cni-268638" | sudo tee /etc/hostname
	I1101 11:19:02.849185  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-268638
	
	I1101 11:19:02.855961  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.856674  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:02.856715  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.856972  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:19:02.857305  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:19:02.857332  119309 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-268638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-268638/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-268638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:19:03.000592  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:19:03.000631  119309 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 11:19:03.000666  119309 buildroot.go:174] setting up certificates
	I1101 11:19:03.000687  119309 provision.go:84] configureAuth start
	I1101 11:19:03.005966  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.137583  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:03.137648  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.142322  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.142942  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:03.142985  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.143150  119309 provision.go:143] copyHostCerts
	I1101 11:19:03.143226  119309 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem, removing ...
	I1101 11:19:03.143244  119309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem
	I1101 11:19:03.143337  119309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 11:19:03.143476  119309 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem, removing ...
	I1101 11:19:03.143489  119309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem
	I1101 11:19:03.143548  119309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 11:19:03.143664  119309 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem, removing ...
	I1101 11:19:03.143678  119309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem
	I1101 11:19:03.143720  119309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 11:19:03.143824  119309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.newest-cni-268638 san=[127.0.0.1 192.168.83.241 localhost minikube newest-cni-268638]
	I1101 11:19:03.483327  119309 provision.go:177] copyRemoteCerts
	I1101 11:19:03.483390  119309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:19:03.487133  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.487719  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:03.487748  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.487932  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:03.584204  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:19:03.628717  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 11:19:03.677398  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 11:19:03.727214  119309 provision.go:87] duration metric: took 726.505982ms to configureAuth
	I1101 11:19:03.727250  119309 buildroot.go:189] setting minikube options for container-runtime
	I1101 11:19:03.727520  119309 config.go:182] Loaded profile config "newest-cni-268638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:19:03.731945  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.732435  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:03.732494  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.732930  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:19:03.733251  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:19:03.733290  119309 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:19:04.069760  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:19:04.069790  119309 machine.go:97] duration metric: took 1.516311361s to provisionDockerMachine
	I1101 11:19:04.069821  119309 start.go:293] postStartSetup for "newest-cni-268638" (driver="kvm2")
	I1101 11:19:04.069837  119309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:19:04.069910  119309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:19:04.073709  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.074194  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:04.074226  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.074554  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:04.182259  119309 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:19:04.190057  119309 info.go:137] Remote host: Buildroot 2025.02
	I1101 11:19:04.190106  119309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 11:19:04.190205  119309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 11:19:04.190342  119309 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem -> 739982.pem in /etc/ssl/certs
	I1101 11:19:04.190485  119309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:19:04.210151  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:19:04.256616  119309 start.go:296] duration metric: took 186.775542ms for postStartSetup
	I1101 11:19:04.256750  119309 fix.go:56] duration metric: took 20.838506313s for fixHost
	I1101 11:19:04.260280  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.260754  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:04.260788  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.260992  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:19:04.261266  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:19:04.261283  119309 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 11:19:04.389074  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761995944.352514696
	
	I1101 11:19:04.389102  119309 fix.go:216] guest clock: 1761995944.352514696
	I1101 11:19:04.389112  119309 fix.go:229] Guest: 2025-11-01 11:19:04.352514696 +0000 UTC Remote: 2025-11-01 11:19:04.256761907 +0000 UTC m=+37.752831701 (delta=95.752789ms)
	I1101 11:19:04.389135  119309 fix.go:200] guest clock delta is within tolerance: 95.752789ms
	I1101 11:19:04.389143  119309 start.go:83] releasing machines lock for "newest-cni-268638", held for 20.97092735s
	I1101 11:19:04.394244  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.394978  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:04.395042  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.396018  119309 ssh_runner.go:195] Run: cat /version.json
	I1101 11:19:04.396716  119309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:19:04.404825  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.405620  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.406188  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:04.406227  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.406424  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:04.406458  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.406849  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:04.406949  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:04.501378  119309 ssh_runner.go:195] Run: systemctl --version
	I1101 11:19:04.534937  119309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:19:04.753637  119309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:19:04.764918  119309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:19:04.765041  119309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:19:04.790985  119309 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
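As logged, the find command above shows its wildcards and parentheses without shell quoting; a properly quoted equivalent (a sketch, not taken verbatim from the minikube source) would be:

    # disable any bridge/podman CNI configs by renaming them, mirroring the logged command
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;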
	I1101 11:19:04.791020  119309 start.go:496] detecting cgroup driver to use...
	I1101 11:19:04.791109  119309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:19:04.816639  119309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:19:04.837643  119309 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:19:04.837726  119309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:19:04.859128  119309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:19:04.880452  119309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:19:05.073252  119309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:19:05.345388  119309 docker.go:234] disabling docker service ...
	I1101 11:19:05.345473  119309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:19:05.371588  119309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:19:05.391751  119309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:19:05.596336  119309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:19:05.800163  119309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:19:05.826270  119309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:19:05.863203  119309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:19:05.863270  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.878405  119309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:19:05.878475  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.894648  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.909477  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.925137  119309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:19:05.946319  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.964326  119309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.993100  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
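Taken together, the sed commands above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, set conmon_cgroup to "pod", and allow unprivileged binds to low ports via default_sysctls. A sketch of the resulting drop-in and a way to inspect it (assuming the stock 02-crio.conf layout; other settings the test leaves untouched are omitted):

    # Expected effect of the edits above on /etc/crio/crio.conf.d/02-crio.conf:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    # Inspect the effective merged configuration:
    sudo crio config | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'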
	I1101 11:19:06.009398  119309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:19:06.023888  119309 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 11:19:06.023972  119309 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 11:19:06.049290  119309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:19:06.064426  119309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:19:06.214825  119309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:19:06.349129  119309 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:19:06.349210  119309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:19:06.355161  119309 start.go:564] Will wait 60s for crictl version
	I1101 11:19:06.355232  119309 ssh_runner.go:195] Run: which crictl
	I1101 11:19:06.359672  119309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 11:19:06.407252  119309 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 11:19:06.407348  119309 ssh_runner.go:195] Run: crio --version
	I1101 11:19:06.443806  119309 ssh_runner.go:195] Run: crio --version
	I1101 11:19:06.483519  119309 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 11:19:06.487909  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:06.488524  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:06.488588  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:06.488858  119309 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1101 11:19:06.494322  119309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:19:06.515748  119309 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 11:19:06.517187  119309 kubeadm.go:884] updating cluster {Name:newest-cni-268638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.241 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:19:06.517334  119309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:19:06.517404  119309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:19:06.569300  119309 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 11:19:06.569384  119309 ssh_runner.go:195] Run: which lz4
	I1101 11:19:06.575446  119309 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 11:19:05.895169  119092 addons.go:515] duration metric: took 3.613636476s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	W1101 11:19:07.199115  119092 node_ready.go:57] node "default-k8s-diff-port-287419" has "Ready":"False" status (will retry)
	W1101 11:19:09.697424  119092 node_ready.go:57] node "default-k8s-diff-port-287419" has "Ready":"False" status (will retry)
	I1101 11:19:10.203943  119092 node_ready.go:49] node "default-k8s-diff-port-287419" is "Ready"
	I1101 11:19:10.203984  119092 node_ready.go:38] duration metric: took 7.510699569s for node "default-k8s-diff-port-287419" to be "Ready" ...
	I1101 11:19:10.203999  119092 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:19:10.204057  119092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:10.243422  119092 api_server.go:72] duration metric: took 7.961877658s to wait for apiserver process to appear ...
	I1101 11:19:10.243453  119092 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:19:10.243478  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:19:10.255943  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 200:
	ok
	I1101 11:19:10.257571  119092 api_server.go:141] control plane version: v1.34.1
	I1101 11:19:10.257607  119092 api_server.go:131] duration metric: took 14.143902ms to wait for apiserver health ...
	I1101 11:19:10.257620  119092 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:19:10.262958  119092 system_pods.go:59] 8 kube-system pods found
	I1101 11:19:10.262997  119092 system_pods.go:61] "coredns-66bc5c9577-drlhc" [2fe001ab-c59d-4a12-9897-d7d2869a1af8] Running
	I1101 11:19:10.263005  119092 system_pods.go:61] "etcd-default-k8s-diff-port-287419" [67bd5955-ba6e-4d48-a952-857e719ddcb6] Running
	I1101 11:19:10.263016  119092 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-287419" [c8154e49-5eed-4825-b594-e588075878ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:19:10.263023  119092 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-287419" [02a42753-0962-4d25-b898-43759f929c36] Running
	I1101 11:19:10.263041  119092 system_pods.go:61] "kube-proxy-lhjdx" [63b7c2eb-cdb2-4318-bef4-e95e3e478fb6] Running
	I1101 11:19:10.263049  119092 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-287419" [49ee2304-24ca-4a26-8b1c-9f59d8281dea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:19:10.263057  119092 system_pods.go:61] "metrics-server-746fcd58dc-zmbnr" [ffa3dd51-bf02-44da-800d-f8d714bc1b36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:19:10.263083  119092 system_pods.go:61] "storage-provisioner" [4a047ac3-d0c4-448e-8066-5a3ccd78fcc1] Running
	I1101 11:19:10.263091  119092 system_pods.go:74] duration metric: took 5.462284ms to wait for pod list to return data ...
	I1101 11:19:10.263101  119092 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:19:10.267391  119092 default_sa.go:45] found service account: "default"
	I1101 11:19:10.267574  119092 default_sa.go:55] duration metric: took 4.460174ms for default service account to be created ...
	I1101 11:19:10.267600  119092 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:19:06.581287  119309 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 11:19:06.581331  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 11:19:08.409367  119309 crio.go:462] duration metric: took 1.83395154s to copy over tarball
	I1101 11:19:08.409456  119309 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 11:19:10.402378  119309 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.992885247s)
	I1101 11:19:10.402419  119309 crio.go:469] duration metric: took 1.993018787s to extract the tarball
	I1101 11:19:10.402431  119309 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 11:19:10.449439  119309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:19:10.505411  119309 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:19:10.505442  119309 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:19:10.505455  119309 kubeadm.go:935] updating node { 192.168.83.241 8443 v1.34.1 crio true true} ...
	I1101 11:19:10.505632  119309 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-268638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:19:10.505740  119309 ssh_runner.go:195] Run: crio config
	I1101 11:19:10.565423  119309 cni.go:84] Creating CNI manager for ""
	I1101 11:19:10.565452  119309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:19:10.565474  119309 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 11:19:10.565511  119309 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.83.241 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-268638 NodeName:newest-cni-268638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:19:10.565743  119309 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-268638"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.241"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.241"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:19:10.565841  119309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:19:10.579061  119309 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:19:10.579148  119309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:19:10.598798  119309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1101 11:19:10.629409  119309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:19:10.654108  119309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1101 11:19:10.678258  119309 ssh_runner.go:195] Run: grep 192.168.83.241	control-plane.minikube.internal$ /etc/hosts
	I1101 11:19:10.685115  119309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:19:10.708632  119309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:19:10.869819  119309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:19:10.895714  119309 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638 for IP: 192.168.83.241
	I1101 11:19:10.895744  119309 certs.go:195] generating shared ca certs ...
	I1101 11:19:10.895769  119309 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:10.895939  119309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 11:19:10.896003  119309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 11:19:10.896020  119309 certs.go:257] generating profile certs ...
	I1101 11:19:10.896175  119309 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/client.key
	I1101 11:19:10.896257  119309 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/apiserver.key.2629d584
	I1101 11:19:10.896306  119309 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/proxy-client.key
	I1101 11:19:10.896465  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem (1338 bytes)
	W1101 11:19:10.896510  119309 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998_empty.pem, impossibly tiny 0 bytes
	I1101 11:19:10.896522  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 11:19:10.896572  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:19:10.896604  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:19:10.896641  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 11:19:10.896708  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:19:10.897339  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:19:10.956463  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:19:11.002135  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:19:11.038484  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:19:11.076307  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 11:19:11.109072  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:19:11.141404  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:19:11.177071  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 11:19:11.209770  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem --> /usr/share/ca-certificates/73998.pem (1338 bytes)
	I1101 11:19:11.243590  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /usr/share/ca-certificates/739982.pem (1708 bytes)
	I1101 11:19:11.281067  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:19:11.317422  119309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:19:11.345658  119309 ssh_runner.go:195] Run: openssl version
	I1101 11:19:11.354713  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73998.pem && ln -fs /usr/share/ca-certificates/73998.pem /etc/ssl/certs/73998.pem"
	I1101 11:19:11.370673  119309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73998.pem
	I1101 11:19:11.377645  119309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:03 /usr/share/ca-certificates/73998.pem
	I1101 11:19:11.377727  119309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73998.pem
	I1101 11:19:11.387892  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73998.pem /etc/ssl/certs/51391683.0"
	I1101 11:19:11.406129  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739982.pem && ln -fs /usr/share/ca-certificates/739982.pem /etc/ssl/certs/739982.pem"
	I1101 11:19:11.422341  119309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739982.pem
	I1101 11:19:11.428443  119309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:03 /usr/share/ca-certificates/739982.pem
	I1101 11:19:11.428508  119309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739982.pem
	I1101 11:19:11.436762  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/739982.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:19:11.452044  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:19:11.467992  119309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:19:11.474024  119309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:19:11.474101  119309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:19:11.483025  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
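The hex file names created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's CApath convention: each certificate is symlinked under its subject hash plus a ".0" suffix so that path-based verification can locate it. A minimal sketch of the same step, assuming the paths from this run:

    # compute the OpenSSL subject hash and link the CA under <hash>.0, as the test does above
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")    # e.g. b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"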
	I1101 11:19:11.499055  119309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:19:11.505134  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:19:11.514907  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:19:11.523957  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:19:11.532373  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:19:11.541455  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:19:11.550756  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
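The -checkend 86400 flag used in the runs above makes openssl exit non-zero when a certificate expires within the next 86400 seconds (24 hours); minikube presumably uses this to decide whether the existing control-plane certificates can be reused. For example:

    # exit 0 if the cert is valid for at least the next 24h, non-zero if it is about to expire
    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for >24h" || echo "expires within 24h"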
	I1101 11:19:11.560282  119309 kubeadm.go:401] StartCluster: {Name:newest-cni-268638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.241 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:19:11.560393  119309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:19:11.560464  119309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:19:10.273597  119092 system_pods.go:86] 8 kube-system pods found
	I1101 11:19:10.273630  119092 system_pods.go:89] "coredns-66bc5c9577-drlhc" [2fe001ab-c59d-4a12-9897-d7d2869a1af8] Running
	I1101 11:19:10.273638  119092 system_pods.go:89] "etcd-default-k8s-diff-port-287419" [67bd5955-ba6e-4d48-a952-857e719ddcb6] Running
	I1101 11:19:10.273650  119092 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-287419" [c8154e49-5eed-4825-b594-e588075878ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:19:10.273662  119092 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-287419" [02a42753-0962-4d25-b898-43759f929c36] Running
	I1101 11:19:10.273671  119092 system_pods.go:89] "kube-proxy-lhjdx" [63b7c2eb-cdb2-4318-bef4-e95e3e478fb6] Running
	I1101 11:19:10.273679  119092 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-287419" [49ee2304-24ca-4a26-8b1c-9f59d8281dea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:19:10.273737  119092 system_pods.go:89] "metrics-server-746fcd58dc-zmbnr" [ffa3dd51-bf02-44da-800d-f8d714bc1b36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:19:10.273749  119092 system_pods.go:89] "storage-provisioner" [4a047ac3-d0c4-448e-8066-5a3ccd78fcc1] Running
	I1101 11:19:10.273771  119092 system_pods.go:126] duration metric: took 6.161144ms to wait for k8s-apps to be running ...
	I1101 11:19:10.273783  119092 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:19:10.273846  119092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:19:10.302853  119092 system_svc.go:56] duration metric: took 29.056278ms WaitForService to wait for kubelet
	I1101 11:19:10.302889  119092 kubeadm.go:587] duration metric: took 8.021353572s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:19:10.302910  119092 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:19:10.308609  119092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:19:10.308637  119092 node_conditions.go:123] node cpu capacity is 2
	I1101 11:19:10.308653  119092 node_conditions.go:105] duration metric: took 5.737557ms to run NodePressure ...
	I1101 11:19:10.308671  119092 start.go:242] waiting for startup goroutines ...
	I1101 11:19:10.308681  119092 start.go:247] waiting for cluster config update ...
	I1101 11:19:10.308695  119092 start.go:256] writing updated cluster config ...
	I1101 11:19:10.309102  119092 ssh_runner.go:195] Run: rm -f paused
	I1101 11:19:10.317997  119092 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:19:10.324619  119092 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-drlhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:10.334515  119092 pod_ready.go:94] pod "coredns-66bc5c9577-drlhc" is "Ready"
	I1101 11:19:10.334581  119092 pod_ready.go:86] duration metric: took 9.911389ms for pod "coredns-66bc5c9577-drlhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:10.339962  119092 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:10.348425  119092 pod_ready.go:94] pod "etcd-default-k8s-diff-port-287419" is "Ready"
	I1101 11:19:10.348455  119092 pod_ready.go:86] duration metric: took 8.464953ms for pod "etcd-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:10.352948  119092 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 11:19:12.445160  119092 pod_ready.go:104] pod "kube-apiserver-default-k8s-diff-port-287419" is not "Ready", error: <nil>
	I1101 11:19:13.294339  119092 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-287419" is "Ready"
	I1101 11:19:13.294377  119092 pod_ready.go:86] duration metric: took 2.941390708s for pod "kube-apiserver-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.298088  119092 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.305389  119092 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-287419" is "Ready"
	I1101 11:19:13.305420  119092 pod_ready.go:86] duration metric: took 7.301513ms for pod "kube-controller-manager-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.310886  119092 pod_ready.go:83] waiting for pod "kube-proxy-lhjdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.329132  119092 pod_ready.go:94] pod "kube-proxy-lhjdx" is "Ready"
	I1101 11:19:13.329158  119092 pod_ready.go:86] duration metric: took 18.248938ms for pod "kube-proxy-lhjdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.529304  119092 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.925193  119092 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-287419" is "Ready"
	I1101 11:19:13.925231  119092 pod_ready.go:86] duration metric: took 395.894846ms for pod "kube-scheduler-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.925249  119092 pod_ready.go:40] duration metric: took 3.607204823s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:19:13.973382  119092 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 11:19:13.975205  119092 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-287419" cluster and "default" namespace by default
	I1101 11:19:11.609666  119309 cri.go:89] found id: ""
	I1101 11:19:11.609741  119309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:19:11.630306  119309 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:19:11.630327  119309 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:19:11.630375  119309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:19:11.651352  119309 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:19:11.652218  119309 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-268638" does not appear in /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:19:11.652756  119309 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-70113/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-268638" cluster setting kubeconfig missing "newest-cni-268638" context setting]
	I1101 11:19:11.653466  119309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:11.710978  119309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:19:11.725584  119309 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.83.241
	I1101 11:19:11.725625  119309 kubeadm.go:1161] stopping kube-system containers ...
	I1101 11:19:11.725642  119309 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 11:19:11.725705  119309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:19:11.772747  119309 cri.go:89] found id: ""
	I1101 11:19:11.772848  119309 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 11:19:11.795471  119309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:19:11.808762  119309 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:19:11.808848  119309 kubeadm.go:158] found existing configuration files:
	
	I1101 11:19:11.808917  119309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:19:11.821235  119309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:19:11.821307  119309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:19:11.835553  119309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:19:11.848021  119309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:19:11.848115  119309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:19:11.863170  119309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:19:11.875313  119309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:19:11.875380  119309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:19:11.890693  119309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:19:11.906182  119309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:19:11.906256  119309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
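Note: the four grep/rm pairs above are the stale-config cleanup step: if the expected control-plane endpoint is not found in a kubeconfig under /etc/kubernetes (here because none of the files exist yet), the file is removed so the later kubeadm phases can regenerate it. A rough sketch of that loop, assuming local execution instead of the ssh_runner calls the log shows:

// a minimal sketch, assuming the four paths and the endpoint exactly as they
// appear in the log; minikube runs the same grep/rm pair over SSH via ssh_runner
package main

import (
	"fmt"
	"os/exec"
)

func cleanStaleConfigs(endpoint string) {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint is absent (or the file is missing),
		// which is the signal to remove the stale file before regenerating it
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q not found in %s, removing\n", endpoint, conf)
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443")
}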
	I1101 11:19:11.919826  119309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:19:11.934053  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:12.015579  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:14.342020  119309 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.326399831s)
	I1101 11:19:14.342088  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:14.660586  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:14.742015  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:14.838918  119309 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:19:14.839004  119309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:15.339395  119309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:15.839460  119309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:16.340084  119309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:16.373447  119309 api_server.go:72] duration metric: took 1.53453739s to wait for apiserver process to appear ...
	I1101 11:19:16.373485  119309 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:19:16.373512  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:19.157737  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:19:19.157765  119309 api_server.go:103] status: https://192.168.83.241:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:19:19.157779  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:19.342659  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:19.342701  119309 api_server.go:103] status: https://192.168.83.241:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:19.374013  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:19.397242  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:19.397282  119309 api_server.go:103] status: https://192.168.83.241:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:19.873795  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:19.880439  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:19.880468  119309 api_server.go:103] status: https://192.168.83.241:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:20.373786  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:20.383394  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:20.383440  119309 api_server.go:103] status: https://192.168.83.241:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:20.874090  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:20.886513  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 200:
	ok
	I1101 11:19:20.897302  119309 api_server.go:141] control plane version: v1.34.1
	I1101 11:19:20.897342  119309 api_server.go:131] duration metric: took 4.523847623s to wait for apiserver health ...
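Note: the healthz wait above tolerates both the 403 from anonymous access and the 500s while post-start hooks finish, and only stops once the endpoint returns 200 "ok". A minimal sketch of such a poll, assuming direct HTTPS access to the endpoint in the log and skipping certificate verification purely for illustration; minikube's own implementation lives in api_server.go and differs in detail:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// the apiserver serves a cluster-internal CA, so verification is skipped
		// here only to keep the sketch self-contained
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 (anonymous user) and 500 (post-start hooks still failing) both
			// mean "keep waiting" in the log above; only 200 ends the wait
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.83.241:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}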
	I1101 11:19:20.897356  119309 cni.go:84] Creating CNI manager for ""
	I1101 11:19:20.897364  119309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:19:20.899671  119309 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:19:20.901215  119309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:19:20.917189  119309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 11:19:20.967064  119309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:19:20.973563  119309 system_pods.go:59] 8 kube-system pods found
	I1101 11:19:20.973612  119309 system_pods.go:61] "coredns-66bc5c9577-x5nfd" [acc63001-4d92-4ca1-ac5d-7a0e2c4a25a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:19:20.973625  119309 system_pods.go:61] "etcd-newest-cni-268638" [b62e4b95-ef59-4654-9898-ba8e0fff3055] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:19:20.973637  119309 system_pods.go:61] "kube-apiserver-newest-cni-268638" [0bbccadf-5e63-497e-99de-df7df8aaf3d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:19:20.973651  119309 system_pods.go:61] "kube-controller-manager-newest-cni-268638" [ee7753cb-2e9a-4a99-bad6-c6f735170567] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:19:20.973662  119309 system_pods.go:61] "kube-proxy-p5ldr" [04b69050-8b02-418a-9872-92d2559f8b82] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 11:19:20.973678  119309 system_pods.go:61] "kube-scheduler-newest-cni-268638" [b1ebd08f-c186-4b93-8642-5f9acc2eef2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:19:20.973693  119309 system_pods.go:61] "metrics-server-746fcd58dc-mv8ln" [59cbbd00-75e5-4542-97f4-c810a5533e4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:19:20.973700  119309 system_pods.go:61] "storage-provisioner" [451e647d-8388-4461-a4a9-09b930bc3f87] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:19:20.973710  119309 system_pods.go:74] duration metric: took 6.623313ms to wait for pod list to return data ...
	I1101 11:19:20.973722  119309 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:19:20.978710  119309 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:19:20.978734  119309 node_conditions.go:123] node cpu capacity is 2
	I1101 11:19:20.978745  119309 node_conditions.go:105] duration metric: took 5.017939ms to run NodePressure ...
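Note: the NodePressure step reads node capacity from the API server (cpu 2 and ephemeral storage 17734596Ki in the lines above). A small client-go sketch that prints the same fields, assuming the kubeconfig path used elsewhere in this run; the report's own check runs inside minikube's node_conditions.go rather than as a separate program:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kubeconfig path taken from the log above (assumed reachable from where this runs)
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21830-70113/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		// the log reports cpu capacity 2 and ephemeral storage 17734596Ki
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}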
	I1101 11:19:20.978794  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:21.345717  119309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:19:21.364040  119309 ops.go:34] apiserver oom_adj: -16
	I1101 11:19:21.364072  119309 kubeadm.go:602] duration metric: took 9.73373589s to restartPrimaryControlPlane
	I1101 11:19:21.364088  119309 kubeadm.go:403] duration metric: took 9.803815673s to StartCluster
	I1101 11:19:21.364112  119309 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:21.364206  119309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:19:21.365717  119309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:21.366054  119309 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.241 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:19:21.366163  119309 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:19:21.366280  119309 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-268638"
	I1101 11:19:21.366314  119309 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-268638"
	W1101 11:19:21.366328  119309 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:19:21.366362  119309 host.go:66] Checking if "newest-cni-268638" exists ...
	I1101 11:19:21.366362  119309 addons.go:70] Setting default-storageclass=true in profile "newest-cni-268638"
	I1101 11:19:21.366386  119309 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-268638"
	I1101 11:19:21.366403  119309 addons.go:70] Setting metrics-server=true in profile "newest-cni-268638"
	I1101 11:19:21.366447  119309 addons.go:239] Setting addon metrics-server=true in "newest-cni-268638"
	W1101 11:19:21.366460  119309 addons.go:248] addon metrics-server should already be in state true
	I1101 11:19:21.366493  119309 host.go:66] Checking if "newest-cni-268638" exists ...
	I1101 11:19:21.366605  119309 addons.go:70] Setting dashboard=true in profile "newest-cni-268638"
	I1101 11:19:21.366647  119309 addons.go:239] Setting addon dashboard=true in "newest-cni-268638"
	W1101 11:19:21.366657  119309 addons.go:248] addon dashboard should already be in state true
	I1101 11:19:21.366689  119309 host.go:66] Checking if "newest-cni-268638" exists ...
	I1101 11:19:21.366414  119309 config.go:182] Loaded profile config "newest-cni-268638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:19:21.368296  119309 out.go:179] * Verifying Kubernetes components...
	I1101 11:19:21.369941  119309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:19:21.370975  119309 addons.go:239] Setting addon default-storageclass=true in "newest-cni-268638"
	W1101 11:19:21.371000  119309 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:19:21.371027  119309 host.go:66] Checking if "newest-cni-268638" exists ...
	I1101 11:19:21.371426  119309 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 11:19:21.371435  119309 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 11:19:21.371468  119309 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:19:21.372639  119309 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 11:19:21.372689  119309 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:19:21.372698  119309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:19:21.372674  119309 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 11:19:21.372902  119309 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:19:21.372918  119309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:19:21.373807  119309 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 11:19:21.375329  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 11:19:21.375353  119309 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 11:19:21.377130  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.377302  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.377598  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.378149  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:21.378182  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.378338  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:21.378371  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.378455  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:21.378763  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:21.378804  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.378796  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:21.379105  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:21.380133  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.380622  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:21.380658  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.380925  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:21.659867  119309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:19:21.685760  119309 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:19:21.685860  119309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:21.729512  119309 api_server.go:72] duration metric: took 363.407154ms to wait for apiserver process to appear ...
	I1101 11:19:21.729556  119309 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:19:21.729579  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:21.748315  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 200:
	ok
	I1101 11:19:21.749440  119309 api_server.go:141] control plane version: v1.34.1
	I1101 11:19:21.749466  119309 api_server.go:131] duration metric: took 19.901219ms to wait for apiserver health ...
	I1101 11:19:21.749475  119309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:19:21.757166  119309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:19:21.761479  119309 system_pods.go:59] 8 kube-system pods found
	I1101 11:19:21.761520  119309 system_pods.go:61] "coredns-66bc5c9577-x5nfd" [acc63001-4d92-4ca1-ac5d-7a0e2c4a25a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:19:21.761544  119309 system_pods.go:61] "etcd-newest-cni-268638" [b62e4b95-ef59-4654-9898-ba8e0fff3055] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:19:21.761559  119309 system_pods.go:61] "kube-apiserver-newest-cni-268638" [0bbccadf-5e63-497e-99de-df7df8aaf3d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:19:21.761570  119309 system_pods.go:61] "kube-controller-manager-newest-cni-268638" [ee7753cb-2e9a-4a99-bad6-c6f735170567] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:19:21.761578  119309 system_pods.go:61] "kube-proxy-p5ldr" [04b69050-8b02-418a-9872-92d2559f8b82] Running
	I1101 11:19:21.761591  119309 system_pods.go:61] "kube-scheduler-newest-cni-268638" [b1ebd08f-c186-4b93-8642-5f9acc2eef2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:19:21.761606  119309 system_pods.go:61] "metrics-server-746fcd58dc-mv8ln" [59cbbd00-75e5-4542-97f4-c810a5533e4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:19:21.761616  119309 system_pods.go:61] "storage-provisioner" [451e647d-8388-4461-a4a9-09b930bc3f87] Running
	I1101 11:19:21.761625  119309 system_pods.go:74] duration metric: took 12.142599ms to wait for pod list to return data ...
	I1101 11:19:21.761639  119309 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:19:21.770025  119309 default_sa.go:45] found service account: "default"
	I1101 11:19:21.770061  119309 default_sa.go:55] duration metric: took 8.413855ms for default service account to be created ...
	I1101 11:19:21.770078  119309 kubeadm.go:587] duration metric: took 403.980934ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 11:19:21.770099  119309 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:19:21.775874  119309 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:19:21.775904  119309 node_conditions.go:123] node cpu capacity is 2
	I1101 11:19:21.775922  119309 node_conditions.go:105] duration metric: took 5.815749ms to run NodePressure ...
	I1101 11:19:21.775938  119309 start.go:242] waiting for startup goroutines ...
	I1101 11:19:21.842846  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 11:19:21.842874  119309 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 11:19:21.844254  119309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 11:19:21.844279  119309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 11:19:21.849369  119309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:19:21.929460  119309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 11:19:21.929491  119309 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 11:19:21.950803  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 11:19:21.950840  119309 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 11:19:22.007473  119309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:19:22.007504  119309 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 11:19:22.080939  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 11:19:22.080965  119309 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 11:19:22.099491  119309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:19:22.201305  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 11:19:22.201329  119309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 11:19:22.292063  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 11:19:22.292100  119309 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 11:19:22.323098  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 11:19:22.323129  119309 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 11:19:22.355259  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 11:19:22.355296  119309 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 11:19:22.439275  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 11:19:22.439303  119309 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 11:19:22.483728  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:19:22.483771  119309 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 11:19:22.540647  119309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:19:23.661781  119309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.904570562s)
	I1101 11:19:23.661937  119309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.812533267s)
	I1101 11:19:23.760490  119309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.660956623s)
	I1101 11:19:23.760542  119309 addons.go:480] Verifying addon metrics-server=true in "newest-cni-268638"
	I1101 11:19:23.970058  119309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.429352819s)
	I1101 11:19:23.971645  119309 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-268638 addons enable metrics-server
	
	I1101 11:19:23.973116  119309 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1101 11:19:23.975292  119309 addons.go:515] duration metric: took 2.60913809s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1101 11:19:23.975347  119309 start.go:247] waiting for cluster config update ...
	I1101 11:19:23.975366  119309 start.go:256] writing updated cluster config ...
	I1101 11:19:23.975782  119309 ssh_runner.go:195] Run: rm -f paused
	I1101 11:19:24.029738  119309 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 11:19:24.031224  119309 out.go:179] * Done! kubectl is now configured to use "newest-cni-268638" cluster and "default" namespace by default
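Note: after both runs above finish, the shared kubeconfig holds one context per profile, with "newest-cni-268638" set as the current context by the final "Done!" line. A small sketch that lists those contexts, assuming the kubeconfig path shown in the log:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kubeconfig path taken from the log above
	cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21830-70113/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
	for name, ctx := range cfg.Contexts {
		fmt.Printf("context %q -> cluster %q, namespace %q\n", name, ctx.Cluster, ctx.Namespace)
	}
}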
	
	
	==> CRI-O <==
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.472297282Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761996495472274613,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc2c076d-1f2d-4df0-a384-9d4e04790dc2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.472821463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ed064af-1e33-48de-bd49-26e9e171b51b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.472928413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ed064af-1e33-48de-bd49-26e9e171b51b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.473127024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd33391b92cb08d4df0d63830b1771aa68f2b232872247b30678527a8de4b2d5,PodSandboxId:19b5e46208120cfe549f0a75de8c3300847e0a996af8339a68d56fdc83499c4b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761996308024369207,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-7pccn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7f271eef-bded-49cd-b5a1-a618ebebcfcb,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838afc0402cc44a7baca900d97efe9a53459c9a5fa48b14c8b5b7ee572673b34,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761995971570999757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848b80bee9de3b138a0e88b1a8f450d5c67ce43c1ecbbaa7aba66e87723fef76,PodSandboxId:5971a0ea778afc3d3b18aa4570fef30841b519925d07dcb801608158527aa33f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761995953413287885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f85d60f-859a-4d40-83a1-8565332c1575,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cd8b8612923325cb76652a74ef98a220acd8f7792ae9b6958de29b4f8cd712,PodSandboxId:7814f2d98f1d8d387a6e03b78381837b77ced36225ca497d63878951f14b8e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995948028974803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-drlhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe001ab-c59d-4a12-9897-d7d2869a1af8,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e758b5eb3e8f723ce0d318b843d3147f85b6309e541bd51c85df5d1849e4490,PodSandboxId:ab68856a906dcfb0191ae4f4213a70de8a113156a0973a84df75b4ff2523aa69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995940518207761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhjdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b7c2eb-cdb2-4318-bef4-e95e3e478fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2444a95999378aeda01a073bdc97ff16fb844a2080b86a21c1bffecb72fdd394,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761995940619379524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18dcdf4c1e01b3ce82caaa2e01dcefe0853e396a33e4853e2943527579d9eba,PodSandboxId:55ed5cb514feedcb1b87714c12893e6c5a251b0eab38b0344c65fa6c32c50eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,Creat
edAt:1761995934946012476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d406d357930944106b6d791e1ab75f69,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abd77441a1170ed4708a5a0dd563e79e0e5dc1e6203d71b175f2377e559dca2,PodSandboxId:284f6d643a797c1c35cd36ffba29d94d54f2afff14962e0474fc181d7f91cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995934921401989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72692cc5e571176344fbccc16480bc9,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dd634ce58953495186556de934f647c8cf41ade9027121ff41b5179263adfa,PodSandboxId:cc82fa51754fbc7c62bc351909c0f05ae870c91f48b701a1f3fdd01f376da5f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917e
c0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995934889787053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d200fdb47f53d891405a2f21d715c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e7eb14b2041650d2ce9e00e64b37f7fefc47da35b3c38bc68983dcd628e8c9,PodSandboxId:d492cdf741ca13e59796c3988730a1e7ef489e51d033a0c
e715392fe349cb57c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995934871234089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605a85dc1a362c49d35893be2a427c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ed064af-1e33-48de-bd49-26e9e171b51b name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.515959401Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1b5602f-429c-4b17-b99c-c673b72b4760 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.516029788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1b5602f-429c-4b17-b99c-c673b72b4760 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.517151709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aed006a3-8c21-43a1-ada5-21301fec1c94 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.517632103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761996495517608925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aed006a3-8c21-43a1-ada5-21301fec1c94 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.518257523Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3df61925-569b-4d90-885f-27a4baee1d29 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.518528657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3df61925-569b-4d90-885f-27a4baee1d29 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.519204609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd33391b92cb08d4df0d63830b1771aa68f2b232872247b30678527a8de4b2d5,PodSandboxId:19b5e46208120cfe549f0a75de8c3300847e0a996af8339a68d56fdc83499c4b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761996308024369207,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-7pccn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7f271eef-bded-49cd-b5a1-a618ebebcfcb,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838afc0402cc44a7baca900d97efe9a53459c9a5fa48b14c8b5b7ee572673b34,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761995971570999757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848b80bee9de3b138a0e88b1a8f450d5c67ce43c1ecbbaa7aba66e87723fef76,PodSandboxId:5971a0ea778afc3d3b18aa4570fef30841b519925d07dcb801608158527aa33f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761995953413287885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f85d60f-859a-4d40-83a1-8565332c1575,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cd8b8612923325cb76652a74ef98a220acd8f7792ae9b6958de29b4f8cd712,PodSandboxId:7814f2d98f1d8d387a6e03b78381837b77ced36225ca497d63878951f14b8e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995948028974803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-drlhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe001ab-c59d-4a12-9897-d7d2869a1af8,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e758b5eb3e8f723ce0d318b843d3147f85b6309e541bd51c85df5d1849e4490,PodSandboxId:ab68856a906dcfb0191ae4f4213a70de8a113156a0973a84df75b4ff2523aa69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995940518207761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhjdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b7c2eb-cdb2-4318-bef4-e95e3e478fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2444a95999378aeda01a073bdc97ff16fb844a2080b86a21c1bffecb72fdd394,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761995940619379524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18dcdf4c1e01b3ce82caaa2e01dcefe0853e396a33e4853e2943527579d9eba,PodSandboxId:55ed5cb514feedcb1b87714c12893e6c5a251b0eab38b0344c65fa6c32c50eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,Creat
edAt:1761995934946012476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d406d357930944106b6d791e1ab75f69,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abd77441a1170ed4708a5a0dd563e79e0e5dc1e6203d71b175f2377e559dca2,PodSandboxId:284f6d643a797c1c35cd36ffba29d94d54f2afff14962e0474fc181d7f91cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995934921401989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72692cc5e571176344fbccc16480bc9,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dd634ce58953495186556de934f647c8cf41ade9027121ff41b5179263adfa,PodSandboxId:cc82fa51754fbc7c62bc351909c0f05ae870c91f48b701a1f3fdd01f376da5f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917e
c0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995934889787053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d200fdb47f53d891405a2f21d715c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e7eb14b2041650d2ce9e00e64b37f7fefc47da35b3c38bc68983dcd628e8c9,PodSandboxId:d492cdf741ca13e59796c3988730a1e7ef489e51d033a0c
e715392fe349cb57c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995934871234089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605a85dc1a362c49d35893be2a427c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=3df61925-569b-4d90-885f-27a4baee1d29 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.558016308Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0675528c-6d43-4bb3-adc5-fc54c88f713d name=/runtime.v1.RuntimeService/Version
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.558390382Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0675528c-6d43-4bb3-adc5-fc54c88f713d name=/runtime.v1.RuntimeService/Version
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.560977408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14948ab9-4f58-4abd-8d22-253f7c731557 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.561444712Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761996495561423104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14948ab9-4f58-4abd-8d22-253f7c731557 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.562175782Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea9e39d7-609d-492e-b63d-d1464c48eeb0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.562266158Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea9e39d7-609d-492e-b63d-d1464c48eeb0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.562481910Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd33391b92cb08d4df0d63830b1771aa68f2b232872247b30678527a8de4b2d5,PodSandboxId:19b5e46208120cfe549f0a75de8c3300847e0a996af8339a68d56fdc83499c4b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761996308024369207,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-7pccn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7f271eef-bded-49cd-b5a1-a618ebebcfcb,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838afc0402cc44a7baca900d97efe9a53459c9a5fa48b14c8b5b7ee572673b34,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761995971570999757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848b80bee9de3b138a0e88b1a8f450d5c67ce43c1ecbbaa7aba66e87723fef76,PodSandboxId:5971a0ea778afc3d3b18aa4570fef30841b519925d07dcb801608158527aa33f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761995953413287885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f85d60f-859a-4d40-83a1-8565332c1575,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cd8b8612923325cb76652a74ef98a220acd8f7792ae9b6958de29b4f8cd712,PodSandboxId:7814f2d98f1d8d387a6e03b78381837b77ced36225ca497d63878951f14b8e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995948028974803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-drlhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe001ab-c59d-4a12-9897-d7d2869a1af8,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e758b5eb3e8f723ce0d318b843d3147f85b6309e541bd51c85df5d1849e4490,PodSandboxId:ab68856a906dcfb0191ae4f4213a70de8a113156a0973a84df75b4ff2523aa69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995940518207761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhjdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b7c2eb-cdb2-4318-bef4-e95e3e478fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2444a95999378aeda01a073bdc97ff16fb844a2080b86a21c1bffecb72fdd394,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761995940619379524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18dcdf4c1e01b3ce82caaa2e01dcefe0853e396a33e4853e2943527579d9eba,PodSandboxId:55ed5cb514feedcb1b87714c12893e6c5a251b0eab38b0344c65fa6c32c50eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,Creat
edAt:1761995934946012476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d406d357930944106b6d791e1ab75f69,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abd77441a1170ed4708a5a0dd563e79e0e5dc1e6203d71b175f2377e559dca2,PodSandboxId:284f6d643a797c1c35cd36ffba29d94d54f2afff14962e0474fc181d7f91cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995934921401989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72692cc5e571176344fbccc16480bc9,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dd634ce58953495186556de934f647c8cf41ade9027121ff41b5179263adfa,PodSandboxId:cc82fa51754fbc7c62bc351909c0f05ae870c91f48b701a1f3fdd01f376da5f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917e
c0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995934889787053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d200fdb47f53d891405a2f21d715c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e7eb14b2041650d2ce9e00e64b37f7fefc47da35b3c38bc68983dcd628e8c9,PodSandboxId:d492cdf741ca13e59796c3988730a1e7ef489e51d033a0c
e715392fe349cb57c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995934871234089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605a85dc1a362c49d35893be2a427c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea9e39d7-609d-492e-b63d-d1464c48eeb0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.599800367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7447ddb0-a04a-4e8f-8bba-330a8546dcbe name=/runtime.v1.RuntimeService/Version
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.599956198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7447ddb0-a04a-4e8f-8bba-330a8546dcbe name=/runtime.v1.RuntimeService/Version
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.601182619Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1851db6-ca36-4c8c-876b-2788c35e68a6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.601688782Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761996495601668773,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1851db6-ca36-4c8c-876b-2788c35e68a6 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.602191204Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae6b78b6-8ec5-4019-8b34-f43d0a4b9f11 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.602242696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae6b78b6-8ec5-4019-8b34-f43d0a4b9f11 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:28:15 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:28:15.602444759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:cd33391b92cb08d4df0d63830b1771aa68f2b232872247b30678527a8de4b2d5,PodSandboxId:19b5e46208120cfe549f0a75de8c3300847e0a996af8339a68d56fdc83499c4b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:6,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761996308024369207,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-7pccn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7f271eef-bded-49cd-b5a1-a618ebebcfcb,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838afc0402cc44a7baca900d97efe9a53459c9a5fa48b14c8b5b7ee572673b34,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761995971570999757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848b80bee9de3b138a0e88b1a8f450d5c67ce43c1ecbbaa7aba66e87723fef76,PodSandboxId:5971a0ea778afc3d3b18aa4570fef30841b519925d07dcb801608158527aa33f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761995953413287885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f85d60f-859a-4d40-83a1-8565332c1575,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cd8b8612923325cb76652a74ef98a220acd8f7792ae9b6958de29b4f8cd712,PodSandboxId:7814f2d98f1d8d387a6e03b78381837b77ced36225ca497d63878951f14b8e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995948028974803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-drlhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe001ab-c59d-4a12-9897-d7d2869a1af8,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e758b5eb3e8f723ce0d318b843d3147f85b6309e541bd51c85df5d1849e4490,PodSandboxId:ab68856a906dcfb0191ae4f4213a70de8a113156a0973a84df75b4ff2523aa69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995940518207761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhjdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b7c2eb-cdb2-4318-bef4-e95e3e478fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2444a95999378aeda01a073bdc97ff16fb844a2080b86a21c1bffecb72fdd394,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761995940619379524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18dcdf4c1e01b3ce82caaa2e01dcefe0853e396a33e4853e2943527579d9eba,PodSandboxId:55ed5cb514feedcb1b87714c12893e6c5a251b0eab38b0344c65fa6c32c50eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,Creat
edAt:1761995934946012476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d406d357930944106b6d791e1ab75f69,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abd77441a1170ed4708a5a0dd563e79e0e5dc1e6203d71b175f2377e559dca2,PodSandboxId:284f6d643a797c1c35cd36ffba29d94d54f2afff14962e0474fc181d7f91cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995934921401989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72692cc5e571176344fbccc16480bc9,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dd634ce58953495186556de934f647c8cf41ade9027121ff41b5179263adfa,PodSandboxId:cc82fa51754fbc7c62bc351909c0f05ae870c91f48b701a1f3fdd01f376da5f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917e
c0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995934889787053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d200fdb47f53d891405a2f21d715c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e7eb14b2041650d2ce9e00e64b37f7fefc47da35b3c38bc68983dcd628e8c9,PodSandboxId:d492cdf741ca13e59796c3988730a1e7ef489e51d033a0c
e715392fe349cb57c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995934871234089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605a85dc1a362c49d35893be2a427c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae6b78b6-8ec5-4019-8b34-f43d0a4b9f11 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	cd33391b92cb0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      3 minutes ago       Exited              dashboard-metrics-scraper   6                   19b5e46208120       dashboard-metrics-scraper-6ffb444bf9-7pccn
	838afc0402cc4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner         2                   f89bd170e3ab7       storage-provisioner
	848b80bee9de3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                     1                   5971a0ea778af       busybox
	04cd8b8612923       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      9 minutes ago       Running             coredns                     1                   7814f2d98f1d8       coredns-66bc5c9577-drlhc
	2444a95999378       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner         1                   f89bd170e3ab7       storage-provisioner
	1e758b5eb3e8f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      9 minutes ago       Running             kube-proxy                  1                   ab68856a906dc       kube-proxy-lhjdx
	c18dcdf4c1e01       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      9 minutes ago       Running             etcd                        1                   55ed5cb514fee       etcd-default-k8s-diff-port-287419
	2abd77441a117       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      9 minutes ago       Running             kube-scheduler              1                   284f6d643a797       kube-scheduler-default-k8s-diff-port-287419
	e1dd634ce5895       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      9 minutes ago       Running             kube-apiserver              1                   cc82fa51754fb       kube-apiserver-default-k8s-diff-port-287419
	44e7eb14b2041       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      9 minutes ago       Running             kube-controller-manager     1                   d492cdf741ca1       kube-controller-manager-default-k8s-diff-port-287419
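	The container inventory above is the CRI-level view of the node. As a minimal sketch, assuming shell access to the node and the default CRI-O socket path, the same listing could be reproduced by hand with:
	
	  # list all containers (running and exited) known to CRI-O
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a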
	
	
	==> coredns [04cd8b8612923325cb76652a74ef98a220acd8f7792ae9b6958de29b4f8cd712] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48196 - 3921 "HINFO IN 3775580877941796997.6007613078252661342. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.092288418s
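	The CoreDNS excerpt above is plain container stdout; assuming the same node access, it can typically be re-read either straight from the runtime (using the container ID in the section header) or through the API server (using the pod name from the listing above):
	
	  # directly from CRI-O on the node
	  sudo crictl logs 04cd8b8612923325cb76652a74ef98a220acd8f7792ae9b6958de29b4f8cd712
	  # or via the API server
	  kubectl -n kube-system logs coredns-66bc5c9577-drlhc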
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-287419
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-287419
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=default-k8s-diff-port-287419
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_15_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:15:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-287419
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:28:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:24:26 +0000   Sat, 01 Nov 2025 11:15:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:24:26 +0000   Sat, 01 Nov 2025 11:15:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:24:26 +0000   Sat, 01 Nov 2025 11:15:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:24:26 +0000   Sat, 01 Nov 2025 11:19:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.189
	  Hostname:    default-k8s-diff-port-287419
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca9e8ff862574318bf222e13a7f3b00b
	  System UUID:                ca9e8ff8-6257-4318-bf22-2e13a7f3b00b
	  Boot ID:                    7b59af70-09e5-4e21-ac3c-3c1ffa10b358
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-drlhc                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-default-k8s-diff-port-287419                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-default-k8s-diff-port-287419             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-287419    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-lhjdx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-diff-port-287419             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-746fcd58dc-zmbnr                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7pccn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jt94t                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 9m14s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeReady                12m                    kubelet          Node default-k8s-diff-port-287419 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node default-k8s-diff-port-287419 event: Registered Node default-k8s-diff-port-287419 in Controller
	  Normal   Starting                 9m22s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m21s (x8 over 9m22s)  kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m21s (x8 over 9m22s)  kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m21s (x7 over 9m22s)  kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9m16s                  kubelet          Node default-k8s-diff-port-287419 has been rebooted, boot id: 7b59af70-09e5-4e21-ac3c-3c1ffa10b358
	  Normal   RegisteredNode           9m12s                  node-controller  Node default-k8s-diff-port-287419 event: Registered Node default-k8s-diff-port-287419 in Controller
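	The node summary above has the shape of `kubectl describe node` output; assuming the minikube profile name also serves as the kubeconfig context, as it does for the other commands quoted in this report, it could be regenerated with:
	
	  kubectl --context default-k8s-diff-port-287419 describe node default-k8s-diff-port-287419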
	
	
	==> dmesg <==
	[Nov 1 11:18] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000691] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004376] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.734714] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.105593] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.143469] kauditd_printk_skb: 74 callbacks suppressed
	[Nov 1 11:19] kauditd_printk_skb: 196 callbacks suppressed
	[  +1.335384] kauditd_printk_skb: 176 callbacks suppressed
	[  +0.090298] kauditd_printk_skb: 141 callbacks suppressed
	[  +6.902046] kauditd_printk_skb: 38 callbacks suppressed
	[ +14.036700] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.439376] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.634980] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 1 11:20] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 1 11:22] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 1 11:25] kauditd_printk_skb: 6 callbacks suppressed
	
	
	==> etcd [c18dcdf4c1e01b3ce82caaa2e01dcefe0853e396a33e4853e2943527579d9eba] <==
	{"level":"warn","ts":"2025-11-01T11:18:58.247989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:18:58.260505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:18:58.283691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:18:58.297577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T11:18:58.393972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39696","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T11:19:04.865548Z","caller":"traceutil/trace.go:172","msg":"trace[1576424671] linearizableReadLoop","detail":"{readStateIndex:628; appliedIndex:628; }","duration":"106.843747ms","start":"2025-11-01T11:19:04.758683Z","end":"2025-11-01T11:19:04.865526Z","steps":["trace[1576424671] 'read index received'  (duration: 106.837055ms)","trace[1576424671] 'applied index is now lower than readState.Index'  (duration: 5.71µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T11:19:04.865712Z","caller":"traceutil/trace.go:172","msg":"trace[118970669] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"131.155334ms","start":"2025-11-01T11:19:04.734546Z","end":"2025-11-01T11:19:04.865701Z","steps":["trace[118970669] 'process raft request'  (duration: 131.061556ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:19:04.865787Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.053475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" limit:1 ","response":"range_response_count:1 size:5183"}
	{"level":"info","ts":"2025-11-01T11:19:04.865946Z","caller":"traceutil/trace.go:172","msg":"trace[1734560141] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:581; }","duration":"107.258666ms","start":"2025-11-01T11:19:04.758677Z","end":"2025-11-01T11:19:04.865936Z","steps":["trace[1734560141] 'agreement among raft nodes before linearized reading'  (duration: 106.959161ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:19:04.866197Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.92486ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:1145"}
	{"level":"info","ts":"2025-11-01T11:19:04.866217Z","caller":"traceutil/trace.go:172","msg":"trace[722761142] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:1; response_revision:582; }","duration":"100.950342ms","start":"2025-11-01T11:19:04.765262Z","end":"2025-11-01T11:19:04.866212Z","steps":["trace[722761142] 'agreement among raft nodes before linearized reading'  (duration: 100.836427ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:19:11.359989Z","caller":"traceutil/trace.go:172","msg":"trace[1737303021] transaction","detail":"{read_only:false; response_revision:679; number_of_response:1; }","duration":"120.872536ms","start":"2025-11-01T11:19:11.239091Z","end":"2025-11-01T11:19:11.359963Z","steps":["trace[1737303021] 'process raft request'  (duration: 56.105618ms)","trace[1737303021] 'compare'  (duration: 64.415753ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T11:19:12.426252Z","caller":"traceutil/trace.go:172","msg":"trace[624116074] transaction","detail":"{read_only:false; response_revision:680; number_of_response:1; }","duration":"278.021397ms","start":"2025-11-01T11:19:12.148216Z","end":"2025-11-01T11:19:12.426237Z","steps":["trace[624116074] 'process raft request'  (duration: 277.900881ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:19:12.843999Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"353.818914ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1431188377579781163 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" mod_revision:680 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" value_size:7070 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T11:19:12.846257Z","caller":"traceutil/trace.go:172","msg":"trace[627877480] transaction","detail":"{read_only:false; response_revision:681; number_of_response:1; }","duration":"399.04676ms","start":"2025-11-01T11:19:12.447194Z","end":"2025-11-01T11:19:12.846241Z","steps":["trace[627877480] 'process raft request'  (duration: 42.121384ms)","trace[627877480] 'compare'  (duration: 353.681691ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:19:12.846383Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:19:12.447175Z","time spent":"399.158958ms","remote":"127.0.0.1:38944","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7148,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" mod_revision:680 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" value_size:7070 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" > >"}
	{"level":"warn","ts":"2025-11-01T11:19:13.277709Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"433.614055ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1431188377579781164 > lease_revoke:<id:13dc9a3f215ccc33>","response":"size:27"}
	{"level":"info","ts":"2025-11-01T11:19:13.277788Z","caller":"traceutil/trace.go:172","msg":"trace[1986476020] linearizableReadLoop","detail":"{readStateIndex:730; appliedIndex:729; }","duration":"427.999836ms","start":"2025-11-01T11:19:12.849776Z","end":"2025-11-01T11:19:13.277776Z","steps":["trace[1986476020] 'read index received'  (duration: 115.359685ms)","trace[1986476020] 'applied index is now lower than readState.Index'  (duration: 312.639454ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:19:13.278051Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.410264ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T11:19:13.278092Z","caller":"traceutil/trace.go:172","msg":"trace[2128589413] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:681; }","duration":"217.460761ms","start":"2025-11-01T11:19:13.060621Z","end":"2025-11-01T11:19:13.278082Z","steps":["trace[2128589413] 'range keys from in-memory index tree'  (duration: 217.380452ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:19:13.278235Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"428.447882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" limit:1 ","response":"range_response_count:1 size:7163"}
	{"level":"info","ts":"2025-11-01T11:19:13.278266Z","caller":"traceutil/trace.go:172","msg":"trace[604866966] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419; range_end:; response_count:1; response_revision:681; }","duration":"428.485242ms","start":"2025-11-01T11:19:12.849772Z","end":"2025-11-01T11:19:13.278257Z","steps":["trace[604866966] 'agreement among raft nodes before linearized reading'  (duration: 428.369273ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:19:13.278291Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:19:12.849758Z","time spent":"428.525255ms","remote":"127.0.0.1:38944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":7185,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T11:19:13.282012Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.43253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T11:19:13.282177Z","caller":"traceutil/trace.go:172","msg":"trace[1391153181] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:681; }","duration":"136.598631ms","start":"2025-11-01T11:19:13.145568Z","end":"2025-11-01T11:19:13.282167Z","steps":["trace[1391153181] 'agreement among raft nodes before linearized reading'  (duration: 136.385737ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:28:15 up 9 min,  0 users,  load average: 0.32, 0.31, 0.21
	Linux default-k8s-diff-port-287419 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e1dd634ce58953495186556de934f647c8cf41ade9027121ff41b5179263adfa] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 11:24:00.554244       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 11:24:00.554457       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 11:24:00.554533       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 11:24:00.556193       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 11:25:00.555482       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 11:25:00.555565       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 11:25:00.555598       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 11:25:00.556904       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 11:25:00.556952       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 11:25:00.556964       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 11:27:00.556365       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 11:27:00.556499       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 11:27:00.556514       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 11:27:00.557695       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 11:27:00.557746       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 11:27:00.557758       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [44e7eb14b2041650d2ce9e00e64b37f7fefc47da35b3c38bc68983dcd628e8c9] <==
	I1101 11:22:03.323509       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:22:33.214586       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:22:33.332253       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:23:03.221368       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:23:03.343390       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:23:33.227561       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:23:33.351363       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:24:03.233791       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:24:03.360971       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:24:33.239421       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:24:33.368314       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:25:03.245501       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:25:03.378281       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:25:33.250512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:25:33.387467       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:26:03.255727       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:26:03.396098       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:26:33.260439       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:26:33.404626       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:27:03.270634       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:27:03.412768       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:27:33.276569       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:27:33.421716       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:28:03.281522       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:28:03.429531       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [1e758b5eb3e8f723ce0d318b843d3147f85b6309e541bd51c85df5d1849e4490] <==
	I1101 11:19:01.268725       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:19:01.369623       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:19:01.369670       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.189"]
	E1101 11:19:01.369774       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:19:01.428199       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 11:19:01.428305       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 11:19:01.428351       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:19:01.441808       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:19:01.442339       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:19:01.442383       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:19:01.448600       1 config.go:200] "Starting service config controller"
	I1101 11:19:01.448678       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:19:01.448724       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:19:01.448748       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:19:01.448773       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:19:01.448787       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:19:01.449736       1 config.go:309] "Starting node config controller"
	I1101 11:19:01.449803       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:19:01.449915       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:19:01.552195       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:19:01.552230       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:19:01.552280       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2abd77441a1170ed4708a5a0dd563e79e0e5dc1e6203d71b175f2377e559dca2] <==
	I1101 11:18:57.948568       1 serving.go:386] Generated self-signed cert in-memory
	W1101 11:18:59.485639       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 11:18:59.485697       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 11:18:59.485712       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 11:18:59.485723       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 11:18:59.575013       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 11:18:59.575215       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:18:59.578747       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 11:18:59.578928       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:18:59.578942       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:18:59.578962       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 11:18:59.680094       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 11:27:34 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:27:34.142015    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761996454139077943  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:27:35 default-k8s-diff-port-287419 kubelet[1225]: I1101 11:27:35.001354    1225 scope.go:117] "RemoveContainer" containerID="cd33391b92cb08d4df0d63830b1771aa68f2b232872247b30678527a8de4b2d5"
	Nov 01 11:27:35 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:27:35.001539    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7pccn_kubernetes-dashboard(7f271eef-bded-49cd-b5a1-a618ebebcfcb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7pccn" podUID="7f271eef-bded-49cd-b5a1-a618ebebcfcb"
	Nov 01 11:27:37 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:27:37.002477    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-zmbnr" podUID="ffa3dd51-bf02-44da-800d-f8d714bc1b36"
	Nov 01 11:27:44 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:27:44.147111    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761996464146670019  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:27:44 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:27:44.147131    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761996464146670019  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:27:46 default-k8s-diff-port-287419 kubelet[1225]: I1101 11:27:46.000512    1225 scope.go:117] "RemoveContainer" containerID="cd33391b92cb08d4df0d63830b1771aa68f2b232872247b30678527a8de4b2d5"
	Nov 01 11:27:46 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:27:46.000653    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7pccn_kubernetes-dashboard(7f271eef-bded-49cd-b5a1-a618ebebcfcb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7pccn" podUID="7f271eef-bded-49cd-b5a1-a618ebebcfcb"
	Nov 01 11:27:51 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:27:51.002603    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-zmbnr" podUID="ffa3dd51-bf02-44da-800d-f8d714bc1b36"
	Nov 01 11:27:54 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:27:54.151116    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761996474149535259  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:27:54 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:27:54.151225    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761996474149535259  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:27:59 default-k8s-diff-port-287419 kubelet[1225]: I1101 11:27:59.001038    1225 scope.go:117] "RemoveContainer" containerID="cd33391b92cb08d4df0d63830b1771aa68f2b232872247b30678527a8de4b2d5"
	Nov 01 11:27:59 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:27:59.001191    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7pccn_kubernetes-dashboard(7f271eef-bded-49cd-b5a1-a618ebebcfcb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7pccn" podUID="7f271eef-bded-49cd-b5a1-a618ebebcfcb"
	Nov 01 11:28:02 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:28:02.005954    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-zmbnr" podUID="ffa3dd51-bf02-44da-800d-f8d714bc1b36"
	Nov 01 11:28:04 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:28:04.152659    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761996484152150205  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:28:04 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:28:04.152701    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761996484152150205  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:28:11 default-k8s-diff-port-287419 kubelet[1225]: I1101 11:28:11.001433    1225 scope.go:117] "RemoveContainer" containerID="cd33391b92cb08d4df0d63830b1771aa68f2b232872247b30678527a8de4b2d5"
	Nov 01 11:28:11 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:28:11.001625    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7pccn_kubernetes-dashboard(7f271eef-bded-49cd-b5a1-a618ebebcfcb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7pccn" podUID="7f271eef-bded-49cd-b5a1-a618ebebcfcb"
	Nov 01 11:28:14 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:28:14.004176    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-zmbnr" podUID="ffa3dd51-bf02-44da-800d-f8d714bc1b36"
	Nov 01 11:28:14 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:28:14.154370    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761996494154074469  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:28:14 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:28:14.154390    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761996494154074469  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:28:14 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:28:14.755167    1225 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 11:28:14 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:28:14.755237    1225 kuberuntime_image.go:43] "Failed to pull image" err="initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Nov 01 11:28:14 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:28:14.755330    1225 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-jt94t_kubernetes-dashboard(797f79dc-31d4-4da5-af7c-2b7c3c4d804b): ErrImagePull: initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 11:28:14 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:28:14.755364    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"initializing source docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jt94t" podUID="797f79dc-31d4-4da5-af7c-2b7c3c4d804b"
	
	
	==> storage-provisioner [2444a95999378aeda01a073bdc97ff16fb844a2080b86a21c1bffecb72fdd394] <==
	I1101 11:19:00.841344       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 11:19:30.866234       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [838afc0402cc44a7baca900d97efe9a53459c9a5fa48b14c8b5b7ee572673b34] <==
	W1101 11:27:51.784389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:27:53.787356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:27:53.792551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:27:55.796337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:27:55.802128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:27:57.805439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:27:57.813399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:27:59.816234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:27:59.821120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:01.825323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:01.834468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:03.839502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:03.853766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:05.858094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:05.868601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:07.872698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:07.877965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:09.880969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:09.886154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:11.889743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:11.895203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:13.898419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:13.904351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:15.910608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:28:15.920992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-zmbnr kubernetes-dashboard-855c9754f9-jt94t
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 describe pod metrics-server-746fcd58dc-zmbnr kubernetes-dashboard-855c9754f9-jt94t
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-287419 describe pod metrics-server-746fcd58dc-zmbnr kubernetes-dashboard-855c9754f9-jt94t: exit status 1 (58.75394ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-zmbnr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-jt94t" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-287419 describe pod metrics-server-746fcd58dc-zmbnr kubernetes-dashboard-855c9754f9-jt94t: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.54s)
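Note: the kubelet log and dashboard pod events in the dump above show pulls of docker.io/kubernetesui/dashboard:v2.7.0 being rejected with "toomanyrequests" (Docker Hub's anonymous pull rate limit), which is why the pod never left Pending. A possible mitigation sketch, assuming a Docker daemon with authenticated Docker Hub credentials is available on the test host (profile name taken from this run; since the pod spec pins the image by digest, pulling the exact digest form may be required for the runtime to resolve it):

	# Pull with authenticated credentials (higher rate limit), then side-load the
	# image into the minikube node so the kubelet does not have to reach Docker Hub.
	docker login
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	minikube -p default-k8s-diff-port-287419 image load docker.io/kubernetesui/dashboard:v2.7.0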

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-jt94t" [797f79dc-31d4-4da5-af7c-2b7c3c4d804b] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1101 11:28:47.424467   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:29:01.114719   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:29:36.721274   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/old-k8s-version-918459/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:29:41.471839   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/bridge-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:30:32.231781   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:30:59.873659   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/no-preload-294319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:31:50.115702   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:31:53.852035   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:31:59.846303   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:32:25.971294   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:32:29.153722   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:32:44.983371   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:33:13.180421   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:33:16.915329   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:33:22.923211   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:33:47.424992   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:33:49.036172   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:34:01.114227   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:34:08.048846   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:34:36.721589   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/old-k8s-version-918459/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:34:41.471559   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/bridge-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:35:10.492100   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:35:24.180011   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:35:59.787975   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/old-k8s-version-918459/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:35:59.873839   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/no-preload-294319/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:36:04.537011   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/bridge-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:36:50.115340   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:36:53.851915   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:36:59.846716   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-11-01 11:37:17.129648101 +0000 UTC m=+6451.871856077
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 describe po kubernetes-dashboard-855c9754f9-jt94t -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-287419 describe po kubernetes-dashboard-855c9754f9-jt94t -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-jt94t
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-287419/192.168.72.189
Start Time:       Sat, 01 Nov 2025 11:19:10 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vgm2k (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-vgm2k:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age                   From               Message
----     ------            ----                  ----               -------
Warning  FailedScheduling  18m                   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Normal   Scheduled         18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jt94t to default-k8s-diff-port-287419
Warning  Failed            15m (x2 over 17m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": reading manifest sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling           13m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed            12m (x5 over 17m)     kubelet            Error: ErrImagePull
Warning  Failed            12m (x3 over 16m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": fetching target platform image selected from manifest list: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff           3m4s (x44 over 17m)   kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed            2m22s (x47 over 17m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 logs kubernetes-dashboard-855c9754f9-jt94t -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-287419 logs kubernetes-dashboard-855c9754f9-jt94t -n kubernetes-dashboard: exit status 1 (74.944193ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-jt94t" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-287419 logs kubernetes-dashboard-855c9754f9-jt94t -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
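Note: for quicker triage than scanning the full describe output above, a minimal sketch (context, namespace, and pod name taken from this run) that surfaces just the container's waiting reason and the pull-related events:

	# Expected output here: ImagePullBackOff (or ErrImagePull while a pull attempt is in flight)
	kubectl --context default-k8s-diff-port-287419 -n kubernetes-dashboard \
	  get pod kubernetes-dashboard-855c9754f9-jt94t \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}'

	# Only the events attached to that pod: pull attempts, back-off, and failure messages
	kubectl --context default-k8s-diff-port-287419 -n kubernetes-dashboard \
	  get events --field-selector involvedObject.name=kubernetes-dashboard-855c9754f9-jt94t \
	  --sort-by=.lastTimestamp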
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-287419 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-287419 logs -n 25: (1.320594306s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-268638 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:18 UTC │ 01 Nov 25 11:19 UTC │
	│ image   │ no-preload-294319 image list --format=json                                                                                                                                                                                                  │ no-preload-294319  │ jenkins │ v1.37.0 │ 01 Nov 25 11:18 UTC │ 01 Nov 25 11:18 UTC │
	│ pause   │ -p no-preload-294319 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-294319  │ jenkins │ v1.37.0 │ 01 Nov 25 11:18 UTC │ 01 Nov 25 11:19 UTC │
	│ unpause │ -p no-preload-294319 --alsologtostderr -v=1                                                                                                                                                                                                 │ no-preload-294319  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p no-preload-294319                                                                                                                                                                                                                        │ no-preload-294319  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p no-preload-294319                                                                                                                                                                                                                        │ no-preload-294319  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /data | grep /data                                                                                                                                                                                              │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/minikube | grep /var/lib/minikube                                                                                                                                                                      │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker                                                                                                                                                                │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox                                                                                                                                                                        │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/cni | grep /var/lib/cni                                                                                                                                                                                │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet                                                                                                                                                                        │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh df -t ext4 /var/lib/docker | grep /var/lib/docker                                                                                                                                                                          │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ ssh     │ guest-290834 ssh test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'                                                                                                                                                           │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p guest-290834                                                                                                                                                                                                                             │ guest-290834       │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ image   │ embed-certs-571864 image list --format=json                                                                                                                                                                                                 │ embed-certs-571864 │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ pause   │ -p embed-certs-571864 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-571864 │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ unpause │ -p embed-certs-571864 --alsologtostderr -v=1                                                                                                                                                                                                │ embed-certs-571864 │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p embed-certs-571864                                                                                                                                                                                                                       │ embed-certs-571864 │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p embed-certs-571864                                                                                                                                                                                                                       │ embed-certs-571864 │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ image   │ newest-cni-268638 image list --format=json                                                                                                                                                                                                  │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ pause   │ -p newest-cni-268638 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ unpause │ -p newest-cni-268638 --alsologtostderr -v=1                                                                                                                                                                                                 │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p newest-cni-268638                                                                                                                                                                                                                        │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	│ delete  │ -p newest-cni-268638                                                                                                                                                                                                                        │ newest-cni-268638  │ jenkins │ v1.37.0 │ 01 Nov 25 11:19 UTC │ 01 Nov 25 11:19 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 11:18:26
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 11:18:26.575966  119309 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:18:26.576303  119309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:18:26.576316  119309 out.go:374] Setting ErrFile to fd 2...
	I1101 11:18:26.576323  119309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:18:26.576668  119309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 11:18:26.577276  119309 out.go:368] Setting JSON to false
	I1101 11:18:26.578558  119309 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10855,"bootTime":1761985052,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 11:18:26.578686  119309 start.go:143] virtualization: kvm guest
	I1101 11:18:26.581032  119309 out.go:179] * [newest-cni-268638] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 11:18:26.582374  119309 notify.go:221] Checking for updates...
	I1101 11:18:26.582382  119309 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:18:26.584687  119309 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:18:26.586092  119309 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:26.590942  119309 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 11:18:26.592615  119309 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 11:18:26.593782  119309 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:18:26.595639  119309 config.go:182] Loaded profile config "newest-cni-268638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:18:26.596410  119309 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:18:26.650853  119309 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 11:18:26.653013  119309 start.go:309] selected driver: kvm2
	I1101 11:18:26.653037  119309 start.go:930] validating driver "kvm2" against &{Name:newest-cni-268638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.241 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:18:26.653229  119309 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:18:26.654941  119309 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 11:18:26.655015  119309 cni.go:84] Creating CNI manager for ""
	I1101 11:18:26.655102  119309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:18:26.655172  119309 start.go:353] cluster config:
	{Name:newest-cni-268638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.241 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:18:26.655313  119309 iso.go:125] acquiring lock: {Name:mk49d9a272bb99d336f82dfc5631a4c8ce9271c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 11:18:26.657121  119309 out.go:179] * Starting "newest-cni-268638" primary control-plane node in "newest-cni-268638" cluster
	I1101 11:18:24.257509  118233 cri.go:89] found id: ""
	I1101 11:18:24.257589  118233 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:18:24.282166  118233 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:18:24.282196  118233 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:18:24.282259  118233 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:18:24.297617  118233 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:18:24.298262  118233 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-294319" does not appear in /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:24.298591  118233 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-70113/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-294319" cluster setting kubeconfig missing "no-preload-294319" context setting]
	I1101 11:18:24.299168  118233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:24.300823  118233 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:18:24.320723  118233 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.49
	I1101 11:18:24.320766  118233 kubeadm.go:1161] stopping kube-system containers ...
	I1101 11:18:24.320783  118233 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 11:18:24.320845  118233 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:18:24.394074  118233 cri.go:89] found id: ""
	I1101 11:18:24.394163  118233 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 11:18:24.421617  118233 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:18:24.435632  118233 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:18:24.435657  118233 kubeadm.go:158] found existing configuration files:
	
	I1101 11:18:24.435708  118233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:18:24.454470  118233 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:18:24.454579  118233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:18:24.473401  118233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:18:24.492090  118233 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:18:24.492178  118233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:18:24.509757  118233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:18:24.527399  118233 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:18:24.527492  118233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:18:24.544597  118233 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:18:24.558312  118233 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:18:24.558380  118233 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:18:24.575629  118233 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:18:24.590163  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:24.768738  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:26.498750  118233 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.729965792s)
	I1101 11:18:26.498832  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:26.884044  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:27.035583  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:27.219246  118233 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:18:27.219341  118233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:27.719611  118233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:28.219499  118233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:28.720342  118233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:28.777475  118233 api_server.go:72] duration metric: took 1.558241424s to wait for apiserver process to appear ...
	I1101 11:18:28.777506  118233 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:18:28.777527  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:28.779007  118233 api_server.go:269] stopped: https://192.168.39.49:8443/healthz: Get "https://192.168.39.49:8443/healthz": dial tcp 192.168.39.49:8443: connect: connection refused
	I1101 11:18:25.532670  118797 main.go:143] libmachine: domain embed-certs-571864 has defined MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:25.533256  118797 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:0a:19", ip: ""} in network mk-embed-certs-571864: {Iface:virbr3 ExpiryTime:2025-11-01 12:18:17 +0000 UTC Type:0 Mac:52:54:00:d1:0a:19 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:embed-certs-571864 Clientid:01:52:54:00:d1:0a:19}
	I1101 11:18:25.533311  118797 main.go:143] libmachine: domain embed-certs-571864 has defined IP address 192.168.61.132 and MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:25.533576  118797 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1101 11:18:25.538934  118797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:18:25.557628  118797 kubeadm.go:884] updating cluster {Name:embed-certs-571864 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-571864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:18:25.557794  118797 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:18:25.557859  118797 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:18:25.610123  118797 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 11:18:25.610214  118797 ssh_runner.go:195] Run: which lz4
	I1101 11:18:25.615610  118797 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 11:18:25.621258  118797 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 11:18:25.621295  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 11:18:27.537982  118797 crio.go:462] duration metric: took 1.922402517s to copy over tarball
	I1101 11:18:27.538059  118797 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 11:18:29.590006  118797 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.051908617s)
	I1101 11:18:29.590067  118797 crio.go:469] duration metric: took 2.052053158s to extract the tarball
	I1101 11:18:29.590078  118797 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 11:18:29.645938  118797 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:18:29.707543  118797 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:18:29.707578  118797 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:18:29.707590  118797 kubeadm.go:935] updating node { 192.168.61.132 8443 v1.34.1 crio true true} ...
	I1101 11:18:29.707732  118797 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-571864 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-571864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:18:29.707844  118797 ssh_runner.go:195] Run: crio config
	I1101 11:18:29.778473  118797 cni.go:84] Creating CNI manager for ""
	I1101 11:18:29.778498  118797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:18:29.778515  118797 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:18:29.778561  118797 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.132 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-571864 NodeName:embed-certs-571864 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:18:29.778754  118797 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-571864"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.132"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.132"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
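
The YAML dump above is the kubeadm configuration that minikube writes out to /var/tmp/minikube/kubeadm.yaml.new (see the scp line a few entries below) and then applies phase by phase, as the earlier "kubeadm init phase certs / kubeconfig / kubelet-start / control-plane / etcd" commands in this log show. Purely as an illustrative sketch under those assumptions (this is not minikube's ssh_runner code), the Go snippet below drives the same phases locally with os/exec, reusing the binary path and config path that appear in the log lines above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run the same kubeadm init phases this log records, in the same order,
	// using the binary and config paths copied from the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(`env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("phase %q:\n%s\n", p, out)
		if err != nil {
			fmt.Println("phase failed:", err)
			return
		}
	}
}
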
	I1101 11:18:29.778834  118797 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:18:29.793364  118797 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:18:29.793443  118797 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:18:29.811009  118797 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1101 11:18:29.843479  118797 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:18:29.876040  118797 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1101 11:18:29.903565  118797 ssh_runner.go:195] Run: grep 192.168.61.132	control-plane.minikube.internal$ /etc/hosts
	I1101 11:18:29.908600  118797 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:18:29.932848  118797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:18:28.216807  119092 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.189:22: connect: no route to host
	I1101 11:18:26.658521  119309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:18:26.658591  119309 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 11:18:26.658604  119309 cache.go:59] Caching tarball of preloaded images
	I1101 11:18:26.658730  119309 preload.go:233] Found /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 11:18:26.658752  119309 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 11:18:26.658903  119309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/config.json ...
	I1101 11:18:26.659211  119309 start.go:360] acquireMachinesLock for newest-cni-268638: {Name:mk53a05d125fe91ead2a39c6bbf2ba926c471e2f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 11:18:29.277988  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:31.834410  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:18:31.834451  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:18:31.834472  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:31.894951  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:18:31.894986  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:18:32.278625  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:32.288324  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:32.288353  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:32.777914  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:32.783509  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:32.783554  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:33.278329  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:33.284284  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:33.284322  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:33.778876  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:33.784229  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:33.784328  118233 api_server.go:103] status: https://192.168.39.49:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:34.277929  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:34.287409  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I1101 11:18:34.297825  118233 api_server.go:141] control plane version: v1.34.1
	I1101 11:18:34.297862  118233 api_server.go:131] duration metric: took 5.520347755s to wait for apiserver health ...
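
The run of 403 and 500 responses above, ending in the 200 "ok", is minikube's api_server.go polling https://192.168.39.49:8443/healthz until the restarted apiserver reports healthy. As a rough standalone sketch of that kind of poll (not minikube's actual client: it skips TLS verification here for brevity, whereas minikube authenticates against the cluster CA; the IP and port are taken from the log), the loop looks roughly like this:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Poll /healthz until it returns 200, printing each body; 403 and 500 bodies
	// like the ones in the log above are expected while post-start hooks finish.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const url = "https://192.168.39.49:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
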
	I1101 11:18:34.297878  118233 cni.go:84] Creating CNI manager for ""
	I1101 11:18:34.297888  118233 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:18:34.299237  118233 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:18:34.300556  118233 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:18:34.322317  118233 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 11:18:34.361848  118233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:18:34.368029  118233 system_pods.go:59] 8 kube-system pods found
	I1101 11:18:34.368077  118233 system_pods.go:61] "coredns-66bc5c9577-x57vz" [eb2f3b71-41f2-4ae3-ac71-9ccc871abfc8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:18:34.368091  118233 system_pods.go:61] "etcd-no-preload-294319" [f4aadb8a-a6a7-4936-98fa-6e662ff2471d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:34.368112  118233 system_pods.go:61] "kube-apiserver-no-preload-294319" [fe68f1cd-151d-472c-955d-6c425117c91d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:18:34.368123  118233 system_pods.go:61] "kube-controller-manager-no-preload-294319" [efb452de-2f7a-4212-96c5-e5a8780b7694] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:18:34.368145  118233 system_pods.go:61] "kube-proxy-2qfgw" [f2d91d64-ec0c-45bf-bf3d-23b5dd8a78e4] Running
	I1101 11:18:34.368154  118233 system_pods.go:61] "kube-scheduler-no-preload-294319" [ec579a88-3103-48ff-b1cf-3463d6080e8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:34.368167  118233 system_pods.go:61] "metrics-server-746fcd58dc-dn4qd" [27a30dc7-b5c2-4eae-979d-72266debe708] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:34.368182  118233 system_pods.go:61] "storage-provisioner" [3af75b2c-851c-4786-8aab-77980cca46b5] Running
	I1101 11:18:34.368192  118233 system_pods.go:74] duration metric: took 6.314947ms to wait for pod list to return data ...
	I1101 11:18:34.368208  118233 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:18:34.375580  118233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:18:34.375616  118233 node_conditions.go:123] node cpu capacity is 2
	I1101 11:18:34.375634  118233 node_conditions.go:105] duration metric: took 7.419177ms to run NodePressure ...
	I1101 11:18:34.375700  118233 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:34.745366  118233 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 11:18:34.751497  118233 kubeadm.go:744] kubelet initialised
	I1101 11:18:34.751551  118233 kubeadm.go:745] duration metric: took 6.134966ms waiting for restarted kubelet to initialise ...
	I1101 11:18:34.751577  118233 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:18:34.778054  118233 ops.go:34] apiserver oom_adj: -16
	I1101 11:18:34.778088  118233 kubeadm.go:602] duration metric: took 10.495882668s to restartPrimaryControlPlane
	I1101 11:18:34.778100  118233 kubeadm.go:403] duration metric: took 10.586894339s to StartCluster
	I1101 11:18:34.778122  118233 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:34.778205  118233 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:34.779356  118233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:34.779671  118233 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.49 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:18:34.779963  118233 config.go:182] Loaded profile config "no-preload-294319": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:18:34.780027  118233 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:18:34.780110  118233 addons.go:70] Setting storage-provisioner=true in profile "no-preload-294319"
	I1101 11:18:34.780146  118233 addons.go:239] Setting addon storage-provisioner=true in "no-preload-294319"
	W1101 11:18:34.780154  118233 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:18:34.780180  118233 host.go:66] Checking if "no-preload-294319" exists ...
	I1101 11:18:34.780204  118233 addons.go:70] Setting default-storageclass=true in profile "no-preload-294319"
	I1101 11:18:34.780226  118233 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-294319"
	I1101 11:18:34.780235  118233 addons.go:70] Setting dashboard=true in profile "no-preload-294319"
	I1101 11:18:34.780259  118233 addons.go:239] Setting addon dashboard=true in "no-preload-294319"
	W1101 11:18:34.780269  118233 addons.go:248] addon dashboard should already be in state true
	I1101 11:18:34.780271  118233 addons.go:70] Setting metrics-server=true in profile "no-preload-294319"
	I1101 11:18:34.780289  118233 addons.go:239] Setting addon metrics-server=true in "no-preload-294319"
	W1101 11:18:34.780296  118233 addons.go:248] addon metrics-server should already be in state true
	I1101 11:18:34.780299  118233 host.go:66] Checking if "no-preload-294319" exists ...
	I1101 11:18:34.780317  118233 host.go:66] Checking if "no-preload-294319" exists ...
	I1101 11:18:34.781686  118233 out.go:179] * Verifying Kubernetes components...
	I1101 11:18:34.783157  118233 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:18:34.784689  118233 addons.go:239] Setting addon default-storageclass=true in "no-preload-294319"
	W1101 11:18:34.784710  118233 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:18:34.784734  118233 host.go:66] Checking if "no-preload-294319" exists ...
	I1101 11:18:34.786058  118233 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:18:34.786098  118233 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 11:18:34.787053  118233 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:18:34.787074  118233 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:18:34.787211  118233 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 11:18:34.788053  118233 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 11:18:34.788073  118233 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 11:18:34.788258  118233 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:18:34.788276  118233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:18:34.790158  118233 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 11:18:30.080717  118797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:18:30.117878  118797 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864 for IP: 192.168.61.132
	I1101 11:18:30.117910  118797 certs.go:195] generating shared ca certs ...
	I1101 11:18:30.117933  118797 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:30.118138  118797 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 11:18:30.118199  118797 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 11:18:30.118214  118797 certs.go:257] generating profile certs ...
	I1101 11:18:30.118347  118797 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/client.key
	I1101 11:18:30.118456  118797 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/apiserver.key.883be73b
	I1101 11:18:30.118556  118797 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/proxy-client.key
	I1101 11:18:30.118806  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem (1338 bytes)
	W1101 11:18:30.118861  118797 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998_empty.pem, impossibly tiny 0 bytes
	I1101 11:18:30.118874  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 11:18:30.118911  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:18:30.118950  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:18:30.118990  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 11:18:30.119080  118797 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:18:30.120035  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:18:30.179115  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:18:30.223455  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:18:30.260204  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:18:30.299705  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 11:18:30.343072  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:18:30.387777  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:18:30.437828  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/embed-certs-571864/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 11:18:30.483847  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem --> /usr/share/ca-certificates/73998.pem (1338 bytes)
	I1101 11:18:30.522708  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /usr/share/ca-certificates/739982.pem (1708 bytes)
	I1101 11:18:30.568043  118797 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:18:30.610967  118797 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:18:30.638495  118797 ssh_runner.go:195] Run: openssl version
	I1101 11:18:30.646344  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73998.pem && ln -fs /usr/share/ca-certificates/73998.pem /etc/ssl/certs/73998.pem"
	I1101 11:18:30.665487  118797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73998.pem
	I1101 11:18:30.673863  118797 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:03 /usr/share/ca-certificates/73998.pem
	I1101 11:18:30.673935  118797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73998.pem
	I1101 11:18:30.685608  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73998.pem /etc/ssl/certs/51391683.0"
	I1101 11:18:30.702888  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739982.pem && ln -fs /usr/share/ca-certificates/739982.pem /etc/ssl/certs/739982.pem"
	I1101 11:18:30.724178  118797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739982.pem
	I1101 11:18:30.732804  118797 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:03 /usr/share/ca-certificates/739982.pem
	I1101 11:18:30.732878  118797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739982.pem
	I1101 11:18:30.744302  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/739982.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:18:30.764295  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:18:30.780037  118797 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:30.788009  118797 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:30.788096  118797 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:30.796430  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
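	The <hash>.0 names above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash lookup convention: `openssl x509 -hash -noout` prints the hash of the certificate's subject name, and anything that trusts /etc/ssl/certs resolves a CA via a <hash>.0 symlink. A sketch of recreating one of the links by hand, using the same paths the log shows:
	
		h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/73998.pem)
		sudo ln -fs /etc/ssl/certs/73998.pem "/etc/ssl/certs/${h}.0"
	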
	I1101 11:18:30.820463  118797 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:18:30.829981  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:18:30.844019  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:18:30.859517  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:18:30.872995  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:18:30.885149  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:18:30.895855  118797 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
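	Each `-checkend 86400` call above asks openssl whether the certificate will expire within the next 86400 seconds (24 hours); the command exits non-zero if it would, and zero otherwise. The same check can be run by hand against any of the listed certs (sketch):
	
		openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
		  && echo "valid for at least 24h" || echo "expires within 24h"
	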
	I1101 11:18:30.909720  118797 kubeadm.go:401] StartCluster: {Name:embed-certs-571864 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-571864 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:18:30.909846  118797 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:18:30.909940  118797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:18:30.966235  118797 cri.go:89] found id: ""
	I1101 11:18:30.966330  118797 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:18:30.984720  118797 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:18:30.984748  118797 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:18:30.984851  118797 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:18:30.999216  118797 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:18:30.999964  118797 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-571864" does not appear in /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:31.000276  118797 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-70113/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-571864" cluster setting kubeconfig missing "embed-certs-571864" context setting]
	I1101 11:18:31.000880  118797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:31.074119  118797 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:18:31.088856  118797 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.61.132
	I1101 11:18:31.088902  118797 kubeadm.go:1161] stopping kube-system containers ...
	I1101 11:18:31.088917  118797 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 11:18:31.088992  118797 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:18:31.141718  118797 cri.go:89] found id: ""
	I1101 11:18:31.141802  118797 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 11:18:31.168517  118797 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:18:31.186956  118797 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:18:31.186983  118797 kubeadm.go:158] found existing configuration files:
	
	I1101 11:18:31.187043  118797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:18:31.204114  118797 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:18:31.204198  118797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:18:31.221331  118797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:18:31.240377  118797 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:18:31.240445  118797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:18:31.258829  118797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:18:31.277183  118797 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:18:31.277257  118797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:18:31.291526  118797 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:18:31.304957  118797 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:18:31.305026  118797 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:18:31.319125  118797 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:18:31.332409  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:31.413339  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:32.850750  118797 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.437367853s)
	I1101 11:18:32.850827  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:33.160969  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:33.248837  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:33.352582  118797 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:18:33.352690  118797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:33.853451  118797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:34.353702  118797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:34.853670  118797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:34.892010  118797 api_server.go:72] duration metric: took 1.539441132s to wait for apiserver process to appear ...
	I1101 11:18:34.892046  118797 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:18:34.892083  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1101 11:18:34.296799  119092 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.189:22: connect: no route to host
	I1101 11:18:34.791192  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 11:18:34.791209  118233 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 11:18:34.791376  118233 main.go:143] libmachine: domain no-preload-294319 has defined MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.792320  118233 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ee:b7:c6", ip: ""} in network mk-no-preload-294319: {Iface:virbr1 ExpiryTime:2025-11-01 12:17:58 +0000 UTC Type:0 Mac:52:54:00:ee:b7:c6 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:no-preload-294319 Clientid:01:52:54:00:ee:b7:c6}
	I1101 11:18:34.792363  118233 main.go:143] libmachine: domain no-preload-294319 has defined IP address 192.168.39.49 and MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.793325  118233 main.go:143] libmachine: domain no-preload-294319 has defined MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.793582  118233 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/no-preload-294319/id_rsa Username:docker}
	I1101 11:18:34.794950  118233 main.go:143] libmachine: domain no-preload-294319 has defined MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.795199  118233 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ee:b7:c6", ip: ""} in network mk-no-preload-294319: {Iface:virbr1 ExpiryTime:2025-11-01 12:17:58 +0000 UTC Type:0 Mac:52:54:00:ee:b7:c6 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:no-preload-294319 Clientid:01:52:54:00:ee:b7:c6}
	I1101 11:18:34.795238  118233 main.go:143] libmachine: domain no-preload-294319 has defined IP address 192.168.39.49 and MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.795620  118233 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/no-preload-294319/id_rsa Username:docker}
	I1101 11:18:34.795647  118233 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ee:b7:c6", ip: ""} in network mk-no-preload-294319: {Iface:virbr1 ExpiryTime:2025-11-01 12:17:58 +0000 UTC Type:0 Mac:52:54:00:ee:b7:c6 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:no-preload-294319 Clientid:01:52:54:00:ee:b7:c6}
	I1101 11:18:34.795673  118233 main.go:143] libmachine: domain no-preload-294319 has defined IP address 192.168.39.49 and MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.796227  118233 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/no-preload-294319/id_rsa Username:docker}
	I1101 11:18:34.797218  118233 main.go:143] libmachine: domain no-preload-294319 has defined MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.797645  118233 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ee:b7:c6", ip: ""} in network mk-no-preload-294319: {Iface:virbr1 ExpiryTime:2025-11-01 12:17:58 +0000 UTC Type:0 Mac:52:54:00:ee:b7:c6 Iaid: IPaddr:192.168.39.49 Prefix:24 Hostname:no-preload-294319 Clientid:01:52:54:00:ee:b7:c6}
	I1101 11:18:34.797675  118233 main.go:143] libmachine: domain no-preload-294319 has defined IP address 192.168.39.49 and MAC address 52:54:00:ee:b7:c6 in network mk-no-preload-294319
	I1101 11:18:34.797859  118233 sshutil.go:53] new ssh client: &{IP:192.168.39.49 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/no-preload-294319/id_rsa Username:docker}
	I1101 11:18:35.220385  118233 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:18:35.272999  118233 node_ready.go:35] waiting up to 6m0s for node "no-preload-294319" to be "Ready" ...
	I1101 11:18:35.278179  118233 node_ready.go:49] node "no-preload-294319" is "Ready"
	I1101 11:18:35.278213  118233 node_ready.go:38] duration metric: took 5.166878ms for node "no-preload-294319" to be "Ready" ...
	I1101 11:18:35.278233  118233 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:18:35.278309  118233 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:35.332707  118233 api_server.go:72] duration metric: took 552.992291ms to wait for apiserver process to appear ...
	I1101 11:18:35.332737  118233 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:18:35.332759  118233 api_server.go:253] Checking apiserver healthz at https://192.168.39.49:8443/healthz ...
	I1101 11:18:35.344415  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 11:18:35.344448  118233 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 11:18:35.347678  118233 api_server.go:279] https://192.168.39.49:8443/healthz returned 200:
	ok
	I1101 11:18:35.350113  118233 api_server.go:141] control plane version: v1.34.1
	I1101 11:18:35.350140  118233 api_server.go:131] duration metric: took 17.395507ms to wait for apiserver health ...
	I1101 11:18:35.350150  118233 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:18:35.358801  118233 system_pods.go:59] 8 kube-system pods found
	I1101 11:18:35.358835  118233 system_pods.go:61] "coredns-66bc5c9577-x57vz" [eb2f3b71-41f2-4ae3-ac71-9ccc871abfc8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:18:35.358844  118233 system_pods.go:61] "etcd-no-preload-294319" [f4aadb8a-a6a7-4936-98fa-6e662ff2471d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:35.358866  118233 system_pods.go:61] "kube-apiserver-no-preload-294319" [fe68f1cd-151d-472c-955d-6c425117c91d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:18:35.358874  118233 system_pods.go:61] "kube-controller-manager-no-preload-294319" [efb452de-2f7a-4212-96c5-e5a8780b7694] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:18:35.358887  118233 system_pods.go:61] "kube-proxy-2qfgw" [f2d91d64-ec0c-45bf-bf3d-23b5dd8a78e4] Running
	I1101 11:18:35.358894  118233 system_pods.go:61] "kube-scheduler-no-preload-294319" [ec579a88-3103-48ff-b1cf-3463d6080e8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:35.358901  118233 system_pods.go:61] "metrics-server-746fcd58dc-dn4qd" [27a30dc7-b5c2-4eae-979d-72266debe708] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:35.358911  118233 system_pods.go:61] "storage-provisioner" [3af75b2c-851c-4786-8aab-77980cca46b5] Running
	I1101 11:18:35.358918  118233 system_pods.go:74] duration metric: took 8.761322ms to wait for pod list to return data ...
	I1101 11:18:35.358927  118233 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:18:35.363863  118233 default_sa.go:45] found service account: "default"
	I1101 11:18:35.363887  118233 default_sa.go:55] duration metric: took 4.950065ms for default service account to be created ...
	I1101 11:18:35.363897  118233 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:18:35.367716  118233 system_pods.go:86] 8 kube-system pods found
	I1101 11:18:35.367748  118233 system_pods.go:89] "coredns-66bc5c9577-x57vz" [eb2f3b71-41f2-4ae3-ac71-9ccc871abfc8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:18:35.367758  118233 system_pods.go:89] "etcd-no-preload-294319" [f4aadb8a-a6a7-4936-98fa-6e662ff2471d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:35.367769  118233 system_pods.go:89] "kube-apiserver-no-preload-294319" [fe68f1cd-151d-472c-955d-6c425117c91d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:18:35.367777  118233 system_pods.go:89] "kube-controller-manager-no-preload-294319" [efb452de-2f7a-4212-96c5-e5a8780b7694] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:18:35.367783  118233 system_pods.go:89] "kube-proxy-2qfgw" [f2d91d64-ec0c-45bf-bf3d-23b5dd8a78e4] Running
	I1101 11:18:35.367791  118233 system_pods.go:89] "kube-scheduler-no-preload-294319" [ec579a88-3103-48ff-b1cf-3463d6080e8a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:35.367804  118233 system_pods.go:89] "metrics-server-746fcd58dc-dn4qd" [27a30dc7-b5c2-4eae-979d-72266debe708] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:35.367814  118233 system_pods.go:89] "storage-provisioner" [3af75b2c-851c-4786-8aab-77980cca46b5] Running
	I1101 11:18:35.367825  118233 system_pods.go:126] duration metric: took 3.92079ms to wait for k8s-apps to be running ...
	I1101 11:18:35.367839  118233 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:18:35.367895  118233 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:18:35.404510  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 11:18:35.404562  118233 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 11:18:35.405800  118233 system_svc.go:56] duration metric: took 37.952183ms WaitForService to wait for kubelet
	I1101 11:18:35.405826  118233 kubeadm.go:587] duration metric: took 626.118166ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:18:35.405847  118233 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:18:35.412815  118233 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:18:35.412842  118233 node_conditions.go:123] node cpu capacity is 2
	I1101 11:18:35.412879  118233 node_conditions.go:105] duration metric: took 7.02532ms to run NodePressure ...
	I1101 11:18:35.412895  118233 start.go:242] waiting for startup goroutines ...
	I1101 11:18:35.445510  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 11:18:35.445567  118233 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 11:18:35.481864  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 11:18:35.481896  118233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 11:18:35.521069  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 11:18:35.521101  118233 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 11:18:35.567949  118233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:18:35.584421  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 11:18:35.584452  118233 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 11:18:35.613219  118233 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 11:18:35.613245  118233 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 11:18:35.614247  118233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:18:35.677563  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 11:18:35.677594  118233 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 11:18:35.688305  118233 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 11:18:35.688351  118233 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 11:18:35.768271  118233 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:18:35.768298  118233 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 11:18:35.783761  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 11:18:35.783805  118233 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 11:18:35.863966  118233 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:18:35.863999  118233 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 11:18:35.879046  118233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:18:35.950088  118233 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:18:38.451481  118233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.837193272s)
	I1101 11:18:38.452011  118233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.884022662s)
	I1101 11:18:38.527182  118233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.648090685s)
	I1101 11:18:38.527241  118233 addons.go:480] Verifying addon metrics-server=true in "no-preload-294319"
	I1101 11:18:38.570181  118233 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.620045696s)
	I1101 11:18:38.571964  118233 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-294319 addons enable metrics-server
	
	I1101 11:18:38.574144  118233 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1101 11:18:37.674209  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:18:37.674269  118797 api_server.go:103] status: https://192.168.61.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:18:37.674291  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1101 11:18:37.780703  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:37.780738  118797 api_server.go:103] status: https://192.168.61.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:37.893073  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1101 11:18:37.919844  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:37.919953  118797 api_server.go:103] status: https://192.168.61.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:38.392229  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1101 11:18:38.441338  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:38.441372  118797 api_server.go:103] status: https://192.168.61.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:38.893047  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1101 11:18:38.902100  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 200:
	ok
	I1101 11:18:38.911904  118797 api_server.go:141] control plane version: v1.34.1
	I1101 11:18:38.911943  118797 api_server.go:131] duration metric: took 4.01988854s to wait for apiserver health ...
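	The 403 and 500 responses above are expected while the restarted apiserver is still completing its post-start hooks: /healthz reports the per-check breakdown until every hook is ok, and anonymous access to /healthz is only granted once the RBAC bootstrap roles exist, which is why the first probe came back 403 for system:anonymous. The same probe can be reproduced by hand (sketch; -k skips verification of the cluster-CA-signed serving cert, and ?verbose always prints the per-check list):
	
		curl -k 'https://192.168.61.132:8443/healthz?verbose'
	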
	I1101 11:18:38.911958  118797 cni.go:84] Creating CNI manager for ""
	I1101 11:18:38.911967  118797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:18:38.913955  118797 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:18:38.575619  118233 addons.go:515] duration metric: took 3.795593493s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1101 11:18:38.575662  118233 start.go:247] waiting for cluster config update ...
	I1101 11:18:38.575680  118233 start.go:256] writing updated cluster config ...
	I1101 11:18:38.575953  118233 ssh_runner.go:195] Run: rm -f paused
	I1101 11:18:38.582127  118233 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:18:38.585969  118233 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x57vz" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:38.915317  118797 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:18:38.944434  118797 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
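	The 496-byte conflist copied above is the bridge CNI configuration minikube generates for the crio runtime. The log does not show its contents, so the following is only an illustrative bridge + host-local config of the same general shape (the file name matches the log; every value inside is an assumption, not the payload that was actually copied):
	
		sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}
		EOF
	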
	I1101 11:18:38.993920  118797 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:18:39.000090  118797 system_pods.go:59] 8 kube-system pods found
	I1101 11:18:39.000176  118797 system_pods.go:61] "coredns-66bc5c9577-w7cfg" [c0f904f6-44f6-4996-92dc-3fb6a537f96c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:18:39.000194  118797 system_pods.go:61] "etcd-embed-certs-571864" [770ba541-6fe5-4e10-84d7-ecf8f6d626f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:39.000208  118797 system_pods.go:61] "kube-apiserver-embed-certs-571864" [c9e8f5fd-436e-48aa-b2b2-f9a9564f2279] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:18:39.000226  118797 system_pods.go:61] "kube-controller-manager-embed-certs-571864" [2356aebd-c6e3-40e5-a125-b436db7c3a48] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:18:39.000244  118797 system_pods.go:61] "kube-proxy-6ddph" [50935e47-809d-4324-8200-148a11692fa8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 11:18:39.000253  118797 system_pods.go:61] "kube-scheduler-embed-certs-571864" [11e5224c-7c54-489f-8396-283ed5892ff9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:39.000264  118797 system_pods.go:61] "metrics-server-746fcd58dc-8xq94" [319dd232-8ff5-4e8c-bb5a-c165604476c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:39.000272  118797 system_pods.go:61] "storage-provisioner" [c5bbb77a-fba5-4683-be08-22021d7600b8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:18:39.000285  118797 system_pods.go:74] duration metric: took 6.33173ms to wait for pod list to return data ...
	I1101 11:18:39.000297  118797 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:18:39.008010  118797 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:18:39.008052  118797 node_conditions.go:123] node cpu capacity is 2
	I1101 11:18:39.008069  118797 node_conditions.go:105] duration metric: took 7.765191ms to run NodePressure ...
	I1101 11:18:39.008147  118797 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:39.471157  118797 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 11:18:39.476477  118797 kubeadm.go:744] kubelet initialised
	I1101 11:18:39.476509  118797 kubeadm.go:745] duration metric: took 5.319514ms waiting for restarted kubelet to initialise ...
	I1101 11:18:39.476551  118797 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:18:39.503024  118797 ops.go:34] apiserver oom_adj: -16
	I1101 11:18:39.503053  118797 kubeadm.go:602] duration metric: took 8.518294777s to restartPrimaryControlPlane
	I1101 11:18:39.503067  118797 kubeadm.go:403] duration metric: took 8.593364705s to StartCluster
	I1101 11:18:39.503107  118797 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:39.503214  118797 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:39.504891  118797 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:39.505219  118797 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.132 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:18:39.505306  118797 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:18:39.505432  118797 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-571864"
	I1101 11:18:39.505439  118797 addons.go:70] Setting dashboard=true in profile "embed-certs-571864"
	I1101 11:18:39.505456  118797 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-571864"
	W1101 11:18:39.505467  118797 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:18:39.505468  118797 addons.go:239] Setting addon dashboard=true in "embed-certs-571864"
	W1101 11:18:39.505486  118797 addons.go:248] addon dashboard should already be in state true
	I1101 11:18:39.505499  118797 host.go:66] Checking if "embed-certs-571864" exists ...
	I1101 11:18:39.505509  118797 config.go:182] Loaded profile config "embed-certs-571864": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:18:39.505541  118797 host.go:66] Checking if "embed-certs-571864" exists ...
	I1101 11:18:39.505588  118797 addons.go:70] Setting metrics-server=true in profile "embed-certs-571864"
	I1101 11:18:39.505612  118797 addons.go:239] Setting addon metrics-server=true in "embed-certs-571864"
	I1101 11:18:39.505612  118797 addons.go:70] Setting default-storageclass=true in profile "embed-certs-571864"
	W1101 11:18:39.505622  118797 addons.go:248] addon metrics-server should already be in state true
	I1101 11:18:39.505635  118797 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-571864"
	I1101 11:18:39.505643  118797 host.go:66] Checking if "embed-certs-571864" exists ...
	I1101 11:18:39.507268  118797 out.go:179] * Verifying Kubernetes components...
	I1101 11:18:39.508435  118797 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:18:39.510190  118797 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 11:18:39.510215  118797 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 11:18:39.510221  118797 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:18:39.510883  118797 addons.go:239] Setting addon default-storageclass=true in "embed-certs-571864"
	W1101 11:18:39.510904  118797 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:18:39.510927  118797 host.go:66] Checking if "embed-certs-571864" exists ...
	I1101 11:18:39.511409  118797 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 11:18:39.511430  118797 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 11:18:39.511416  118797 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:18:39.511596  118797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:18:39.512695  118797 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 11:18:39.513453  118797 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:18:39.513472  118797 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:18:39.514193  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 11:18:39.514213  118797 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 11:18:39.516402  118797 main.go:143] libmachine: domain embed-certs-571864 has defined MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.516615  118797 main.go:143] libmachine: domain embed-certs-571864 has defined MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.517468  118797 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:0a:19", ip: ""} in network mk-embed-certs-571864: {Iface:virbr3 ExpiryTime:2025-11-01 12:18:17 +0000 UTC Type:0 Mac:52:54:00:d1:0a:19 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:embed-certs-571864 Clientid:01:52:54:00:d1:0a:19}
	I1101 11:18:39.517506  118797 main.go:143] libmachine: domain embed-certs-571864 has defined IP address 192.168.61.132 and MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.517582  118797 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:0a:19", ip: ""} in network mk-embed-certs-571864: {Iface:virbr3 ExpiryTime:2025-11-01 12:18:17 +0000 UTC Type:0 Mac:52:54:00:d1:0a:19 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:embed-certs-571864 Clientid:01:52:54:00:d1:0a:19}
	I1101 11:18:39.517617  118797 main.go:143] libmachine: domain embed-certs-571864 has defined IP address 192.168.61.132 and MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.517713  118797 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/embed-certs-571864/id_rsa Username:docker}
	I1101 11:18:39.518276  118797 main.go:143] libmachine: domain embed-certs-571864 has defined MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.518474  118797 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/embed-certs-571864/id_rsa Username:docker}
	I1101 11:18:39.518958  118797 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:0a:19", ip: ""} in network mk-embed-certs-571864: {Iface:virbr3 ExpiryTime:2025-11-01 12:18:17 +0000 UTC Type:0 Mac:52:54:00:d1:0a:19 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:embed-certs-571864 Clientid:01:52:54:00:d1:0a:19}
	I1101 11:18:39.518992  118797 main.go:143] libmachine: domain embed-certs-571864 has defined IP address 192.168.61.132 and MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.519188  118797 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/embed-certs-571864/id_rsa Username:docker}
	I1101 11:18:39.519411  118797 main.go:143] libmachine: domain embed-certs-571864 has defined MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.519970  118797 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d1:0a:19", ip: ""} in network mk-embed-certs-571864: {Iface:virbr3 ExpiryTime:2025-11-01 12:18:17 +0000 UTC Type:0 Mac:52:54:00:d1:0a:19 Iaid: IPaddr:192.168.61.132 Prefix:24 Hostname:embed-certs-571864 Clientid:01:52:54:00:d1:0a:19}
	I1101 11:18:39.520000  118797 main.go:143] libmachine: domain embed-certs-571864 has defined IP address 192.168.61.132 and MAC address 52:54:00:d1:0a:19 in network mk-embed-certs-571864
	I1101 11:18:39.520229  118797 sshutil.go:53] new ssh client: &{IP:192.168.61.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/embed-certs-571864/id_rsa Username:docker}
	I1101 11:18:39.837212  118797 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:18:39.875438  118797 node_ready.go:35] waiting up to 6m0s for node "embed-certs-571864" to be "Ready" ...
	I1101 11:18:38.330029  119092 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.72.189:22: connect: connection refused
	I1101 11:18:40.157446  118797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:18:40.182830  118797 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 11:18:40.182869  118797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 11:18:40.207032  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 11:18:40.207065  118797 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 11:18:40.224171  118797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:18:40.245899  118797 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 11:18:40.245932  118797 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 11:18:40.287906  118797 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:18:40.287943  118797 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 11:18:40.301192  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 11:18:40.301222  118797 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 11:18:40.410546  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 11:18:40.410574  118797 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 11:18:40.426329  118797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:18:40.498210  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 11:18:40.498243  118797 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 11:18:40.586946  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 11:18:40.586975  118797 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 11:18:40.684496  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 11:18:40.684524  118797 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 11:18:40.747690  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 11:18:40.747717  118797 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 11:18:40.794606  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 11:18:40.794635  118797 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 11:18:40.856557  118797 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:18:40.856583  118797 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 11:18:40.922648  118797 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
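
Each addon is first staged under /etc/kubernetes/addons and then applied in a single kubectl invocation against the VM-local kubeconfig, as in the command above. A hedged sketch of that invocation from Go, trimmed to two manifests for brevity (paths and binary location are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}

	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}

	cmd := exec.Command(kubectl, args...)
	// Point kubectl at the VM-local kubeconfig, mirroring KUBECONFIG=... in the log.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl apply failed:", err)
		os.Exit(1)
	}
}
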
	W1101 11:18:41.882343  118797 node_ready.go:57] node "embed-certs-571864" has "Ready":"False" status (will retry)
	I1101 11:18:42.027889  118797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.870394457s)
	I1101 11:18:42.027963  118797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.803748294s)
	I1101 11:18:42.035246  118797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.608877428s)
	I1101 11:18:42.035285  118797 addons.go:480] Verifying addon metrics-server=true in "embed-certs-571864"
	I1101 11:18:42.490212  118797 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.567507869s)
	I1101 11:18:42.492205  118797 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-571864 addons enable metrics-server
	
	I1101 11:18:42.493903  118797 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1101 11:18:43.418175  119309 start.go:364] duration metric: took 16.758927327s to acquireMachinesLock for "newest-cni-268638"
	I1101 11:18:43.418233  119309 start.go:96] Skipping create...Using existing machine configuration
	I1101 11:18:43.418240  119309 fix.go:54] fixHost starting: 
	I1101 11:18:43.421209  119309 fix.go:112] recreateIfNeeded on newest-cni-268638: state=Stopped err=<nil>
	W1101 11:18:43.421247  119309 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 11:18:40.593855  118233 pod_ready.go:94] pod "coredns-66bc5c9577-x57vz" is "Ready"
	I1101 11:18:40.593893  118233 pod_ready.go:86] duration metric: took 2.007903056s for pod "coredns-66bc5c9577-x57vz" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:40.600654  118233 pod_ready.go:83] waiting for pod "etcd-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 11:18:42.609951  118233 pod_ready.go:104] pod "etcd-no-preload-294319" is not "Ready", error: <nil>
	I1101 11:18:43.612210  118233 pod_ready.go:94] pod "etcd-no-preload-294319" is "Ready"
	I1101 11:18:43.612245  118233 pod_ready.go:86] duration metric: took 3.011556469s for pod "etcd-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:43.616419  118233 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:42.495295  118797 addons.go:515] duration metric: took 2.990004035s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	W1101 11:18:44.380461  118797 node_ready.go:57] node "embed-certs-571864" has "Ready":"False" status (will retry)
	I1101 11:18:41.456668  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:18:41.461302  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.461846  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.461891  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.462316  119092 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/config.json ...
	I1101 11:18:41.462586  119092 machine.go:94] provisionDockerMachine start ...
	I1101 11:18:41.465685  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.466175  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.466210  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.466455  119092 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:41.466750  119092 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1101 11:18:41.466770  119092 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:18:41.592488  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 11:18:41.592519  119092 buildroot.go:166] provisioning hostname "default-k8s-diff-port-287419"
	I1101 11:18:41.596132  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.596670  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.596707  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.596947  119092 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:41.597275  119092 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1101 11:18:41.597300  119092 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-287419 && echo "default-k8s-diff-port-287419" | sudo tee /etc/hostname
	I1101 11:18:41.751054  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-287419
	
	I1101 11:18:41.755077  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.755663  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.755701  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.755942  119092 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:41.756227  119092 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1101 11:18:41.756264  119092 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-287419' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-287419/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-287419' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:18:41.894839  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:18:41.894879  119092 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 11:18:41.894961  119092 buildroot.go:174] setting up certificates
	I1101 11:18:41.894980  119092 provision.go:84] configureAuth start
	I1101 11:18:41.898652  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.899216  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.899255  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.902742  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.903260  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:41.903309  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:41.903463  119092 provision.go:143] copyHostCerts
	I1101 11:18:41.903526  119092 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem, removing ...
	I1101 11:18:41.903562  119092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem
	I1101 11:18:41.903662  119092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 11:18:41.903798  119092 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem, removing ...
	I1101 11:18:41.903816  119092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem
	I1101 11:18:41.903869  119092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 11:18:41.903964  119092 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem, removing ...
	I1101 11:18:41.903978  119092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem
	I1101 11:18:41.904020  119092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 11:18:41.904117  119092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-287419 san=[127.0.0.1 192.168.72.189 default-k8s-diff-port-287419 localhost minikube]
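
The provisioner generates a server certificate whose SANs cover the loopback address, the VM IP, the profile name, localhost and minikube, as listed above. A simplified crypto/x509 sketch of that step; it self-signs for brevity, whereas the real flow signs the server cert with the ca.pem/ca-key.pem pair.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// SANs mirroring the log: loopback, the VM IP, the profile name, localhost, minikube.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-287419"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.189")},
		DNSNames:     []string{"default-k8s-diff-port-287419", "localhost", "minikube"},
	}

	// Self-signed here; the real flow uses the CA as the parent certificate instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
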
	I1101 11:18:42.668830  119092 provision.go:177] copyRemoteCerts
	I1101 11:18:42.668897  119092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:18:42.672740  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:42.673289  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:42.673322  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:42.673497  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:18:42.768628  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 11:18:42.804283  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 11:18:42.841345  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:18:42.877246  119092 provision.go:87] duration metric: took 982.248219ms to configureAuth
	I1101 11:18:42.877277  119092 buildroot.go:189] setting minikube options for container-runtime
	I1101 11:18:42.877486  119092 config.go:182] Loaded profile config "default-k8s-diff-port-287419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:18:42.881112  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:42.881569  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:42.881597  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:42.881942  119092 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:42.882150  119092 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1101 11:18:42.882164  119092 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:18:43.154660  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:18:43.154696  119092 machine.go:97] duration metric: took 1.692092034s to provisionDockerMachine
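
provisionDockerMachine drives every step over SSH using the per-machine id_rsa key shown in the sshutil lines above. A minimal sketch with golang.org/x/crypto/ssh; the helper is illustrative and is not libmachine's API, while the user, address and key path are taken from the log.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this against real hosts
	}
	client, err := ssh.Dial("tcp", "192.168.72.189:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "remote command failed:", err)
	}
}
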
	I1101 11:18:43.154717  119092 start.go:293] postStartSetup for "default-k8s-diff-port-287419" (driver="kvm2")
	I1101 11:18:43.154737  119092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:18:43.154856  119092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:18:43.158201  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.158765  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:43.158814  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.159025  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:18:43.245201  119092 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:18:43.251371  119092 info.go:137] Remote host: Buildroot 2025.02
	I1101 11:18:43.251408  119092 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 11:18:43.251487  119092 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 11:18:43.251587  119092 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem -> 739982.pem in /etc/ssl/certs
	I1101 11:18:43.251681  119092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:18:43.264422  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:18:43.299403  119092 start.go:296] duration metric: took 144.66394ms for postStartSetup
	I1101 11:18:43.299451  119092 fix.go:56] duration metric: took 19.996599515s for fixHost
	I1101 11:18:43.302625  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.303139  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:43.303168  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.303320  119092 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:43.303555  119092 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.189 22 <nil> <nil>}
	I1101 11:18:43.303566  119092 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 11:18:43.418003  119092 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761995923.381565826
	
	I1101 11:18:43.418026  119092 fix.go:216] guest clock: 1761995923.381565826
	I1101 11:18:43.418038  119092 fix.go:229] Guest: 2025-11-01 11:18:43.381565826 +0000 UTC Remote: 2025-11-01 11:18:43.299455347 +0000 UTC m=+38.090698708 (delta=82.110479ms)
	I1101 11:18:43.418081  119092 fix.go:200] guest clock delta is within tolerance: 82.110479ms
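
The guest-clock check compares the guest's `date +%s.%N` output with the host-side timestamp and only resynchronizes when the difference exceeds a tolerance. A small sketch of that comparison using the two timestamps from the log; the 1s tolerance is an assumed value for illustration.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestDelta parses `date +%s.%N` output from the guest and returns guest minus host.
func guestDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Timestamps from the log above: guest 1761995923.381565826, host-side 1761995923.299455347.
	host := time.Unix(0, int64(1761995923.299455347*float64(time.Second)))
	delta, err := guestDelta("1761995923.381565826", host)
	if err != nil {
		panic(err)
	}

	const tolerance = 1 * time.Second // assumed threshold for illustration
	abs := delta
	if abs < 0 {
		abs = -abs
	}
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, abs <= tolerance)
}
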
	I1101 11:18:43.418095  119092 start.go:83] releasing machines lock for "default-k8s-diff-port-287419", held for 20.115269056s
	I1101 11:18:43.422245  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.422873  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:43.422922  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.423656  119092 ssh_runner.go:195] Run: cat /version.json
	I1101 11:18:43.423745  119092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:18:43.427633  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.428098  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.428788  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:43.428841  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.428920  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:43.428967  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:43.429056  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:18:43.429264  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:18:43.534909  119092 ssh_runner.go:195] Run: systemctl --version
	I1101 11:18:43.542779  119092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:18:43.705592  119092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:18:43.717179  119092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:18:43.717261  119092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:18:43.749011  119092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 11:18:43.749051  119092 start.go:496] detecting cgroup driver to use...
	I1101 11:18:43.749137  119092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:18:43.772342  119092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:18:43.791797  119092 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:18:43.791870  119092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:18:43.811527  119092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:18:43.834526  119092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:18:44.023287  119092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:18:44.275548  119092 docker.go:234] disabling docker service ...
	I1101 11:18:44.275630  119092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:18:44.299729  119092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:18:44.322944  119092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:18:44.580019  119092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:18:44.751299  119092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:18:44.778037  119092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:18:44.813744  119092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:18:44.813818  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.832513  119092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:18:44.832603  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.848201  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.863420  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.884202  119092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:18:44.900851  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.918106  119092 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.949127  119092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:18:44.965351  119092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:18:44.978495  119092 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 11:18:44.978603  119092 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 11:18:45.005439  119092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
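
When the bridge-netfilter sysctl is missing, the runtime setup falls back to loading br_netfilter and then enables IPv4 forwarding, as the three commands above show. A sketch of that fallback (must run as root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Probe first: if the key is missing, br_netfilter has not been loaded yet.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge netfilter sysctl missing, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe br_netfilter failed:", err)
			os.Exit(1)
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` in the log.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
		os.Exit(1)
	}
}
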
	I1101 11:18:45.023377  119092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:18:45.237718  119092 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:18:45.492133  119092 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:18:45.492225  119092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:18:45.499840  119092 start.go:564] Will wait 60s for crictl version
	I1101 11:18:45.499923  119092 ssh_runner.go:195] Run: which crictl
	I1101 11:18:45.505873  119092 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 11:18:45.562824  119092 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 11:18:45.562918  119092 ssh_runner.go:195] Run: crio --version
	I1101 11:18:45.604231  119092 ssh_runner.go:195] Run: crio --version
	I1101 11:18:45.652733  119092 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 11:18:45.638411  118233 pod_ready.go:94] pod "kube-apiserver-no-preload-294319" is "Ready"
	I1101 11:18:45.638450  118233 pod_ready.go:86] duration metric: took 2.022002011s for pod "kube-apiserver-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.644242  118233 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.651083  118233 pod_ready.go:94] pod "kube-controller-manager-no-preload-294319" is "Ready"
	I1101 11:18:45.651119  118233 pod_ready.go:86] duration metric: took 6.837894ms for pod "kube-controller-manager-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.656820  118233 pod_ready.go:83] waiting for pod "kube-proxy-2qfgw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.676011  118233 pod_ready.go:94] pod "kube-proxy-2qfgw" is "Ready"
	I1101 11:18:45.676047  118233 pod_ready.go:86] duration metric: took 19.197486ms for pod "kube-proxy-2qfgw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.680557  118233 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:46.006840  118233 pod_ready.go:94] pod "kube-scheduler-no-preload-294319" is "Ready"
	I1101 11:18:46.006874  118233 pod_ready.go:86] duration metric: took 326.284636ms for pod "kube-scheduler-no-preload-294319" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:46.006890  118233 pod_ready.go:40] duration metric: took 7.424729376s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:18:46.076140  118233 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 11:18:46.077877  118233 out.go:179] * Done! kubectl is now configured to use "no-preload-294319" cluster and "default" namespace by default
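
The pod_ready loop above polls each control-plane pod until its Ready condition reports True or the timeout expires. An equivalent check can be expressed with kubectl's jsonpath output; this polling helper is illustrative, with the context and pod name taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady returns true when the pod's Ready condition reports "True".
func podReady(kubeContext, namespace, pod string) bool {
	out, err := exec.Command("kubectl", "--context", kubeContext, "-n", namespace, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for !podReady("no-preload-294319", "kube-system", "etcd-no-preload-294319") {
		if time.Now().After(deadline) {
			fmt.Println("timed out waiting for pod to be Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("pod is Ready")
}
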
	I1101 11:18:43.422982  119309 out.go:252] * Restarting existing kvm2 VM for "newest-cni-268638" ...
	I1101 11:18:43.423037  119309 main.go:143] libmachine: starting domain...
	I1101 11:18:43.423053  119309 main.go:143] libmachine: ensuring networks are active...
	I1101 11:18:43.424417  119309 main.go:143] libmachine: Ensuring network default is active
	I1101 11:18:43.425173  119309 main.go:143] libmachine: Ensuring network mk-newest-cni-268638 is active
	I1101 11:18:43.426407  119309 main.go:143] libmachine: getting domain XML...
	I1101 11:18:43.428447  119309 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-268638</name>
	  <uuid>40498d54-a520-4b96-9f84-14615a0fb7fb</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/newest-cni-268638.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:62:b8:3b'/>
	      <source network='mk-newest-cni-268638'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:0b:98:32'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 11:18:45.136752  119309 main.go:143] libmachine: waiting for domain to start...
	I1101 11:18:45.138581  119309 main.go:143] libmachine: domain is now running
	I1101 11:18:45.138602  119309 main.go:143] libmachine: waiting for IP...
	I1101 11:18:45.139581  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:18:45.140387  119309 main.go:143] libmachine: domain newest-cni-268638 has current primary IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:18:45.140405  119309 main.go:143] libmachine: found domain IP: 192.168.83.241
	I1101 11:18:45.140413  119309 main.go:143] libmachine: reserving static IP address...
	I1101 11:18:45.140922  119309 main.go:143] libmachine: found host DHCP lease matching {name: "newest-cni-268638", mac: "52:54:00:62:b8:3b", ip: "192.168.83.241"} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:17:40 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:18:45.140956  119309 main.go:143] libmachine: skip adding static IP to network mk-newest-cni-268638 - found existing host DHCP lease matching {name: "newest-cni-268638", mac: "52:54:00:62:b8:3b", ip: "192.168.83.241"}
	I1101 11:18:45.140967  119309 main.go:143] libmachine: reserved static IP address 192.168.83.241 for domain newest-cni-268638
	I1101 11:18:45.140974  119309 main.go:143] libmachine: waiting for SSH...
	I1101 11:18:45.140981  119309 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 11:18:45.144764  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:18:45.145315  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:17:40 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:18:45.145357  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:18:45.145605  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:18:45.145929  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:18:45.145949  119309 main.go:143] libmachine: About to run SSH command:
	exit 0
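
Before provisioning, the driver waits for the restarted VM to accept SSH connections (the `exit 0` probe above). A minimal TCP-dial retry sketch; the address comes from the DHCP lease in the log and the timeouts are assumed values.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.83.241:22" // IP reserved for newest-cni-268638 in the log
	deadline := time.Now().Add(2 * time.Minute)

	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("sshd is accepting connections")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("gave up waiting for SSH:", err)
			return
		}
		time.Sleep(2 * time.Second)
	}
}
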
	W1101 11:18:46.385220  118797 node_ready.go:57] node "embed-certs-571864" has "Ready":"False" status (will retry)
	I1101 11:18:48.384345  118797 node_ready.go:49] node "embed-certs-571864" is "Ready"
	I1101 11:18:48.384409  118797 node_ready.go:38] duration metric: took 8.508911909s for node "embed-certs-571864" to be "Ready" ...
	I1101 11:18:48.384432  118797 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:18:48.384515  118797 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:48.425220  118797 api_server.go:72] duration metric: took 8.919952173s to wait for apiserver process to appear ...
	I1101 11:18:48.425259  118797 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:18:48.425286  118797 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1101 11:18:48.434060  118797 api_server.go:279] https://192.168.61.132:8443/healthz returned 200:
	ok
	I1101 11:18:48.435647  118797 api_server.go:141] control plane version: v1.34.1
	I1101 11:18:48.435681  118797 api_server.go:131] duration metric: took 10.412081ms to wait for apiserver health ...
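
The healthz gate above issues an HTTPS GET against <apiserver-ip>:8443/healthz and accepts a 200 response with body "ok". A sketch of the same probe; it skips TLS verification because the apiserver certificate is signed by the cluster's own CA, which a real client would load into its root pool instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Quick probe only: a proper client would put the cluster CA into tls.Config.RootCAs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get("https://192.168.61.132:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
}
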
	I1101 11:18:48.435693  118797 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:18:48.440556  118797 system_pods.go:59] 8 kube-system pods found
	I1101 11:18:48.440590  118797 system_pods.go:61] "coredns-66bc5c9577-w7cfg" [c0f904f6-44f6-4996-92dc-3fb6a537f96c] Running
	I1101 11:18:48.440609  118797 system_pods.go:61] "etcd-embed-certs-571864" [770ba541-6fe5-4e10-84d7-ecf8f6d626f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:48.440615  118797 system_pods.go:61] "kube-apiserver-embed-certs-571864" [c9e8f5fd-436e-48aa-b2b2-f9a9564f2279] Running
	I1101 11:18:48.440622  118797 system_pods.go:61] "kube-controller-manager-embed-certs-571864" [2356aebd-c6e3-40e5-a125-b436db7c3a48] Running
	I1101 11:18:48.440627  118797 system_pods.go:61] "kube-proxy-6ddph" [50935e47-809d-4324-8200-148a11692fa8] Running
	I1101 11:18:48.440634  118797 system_pods.go:61] "kube-scheduler-embed-certs-571864" [11e5224c-7c54-489f-8396-283ed5892ff9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:48.440642  118797 system_pods.go:61] "metrics-server-746fcd58dc-8xq94" [319dd232-8ff5-4e8c-bb5a-c165604476c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:48.440657  118797 system_pods.go:61] "storage-provisioner" [c5bbb77a-fba5-4683-be08-22021d7600b8] Running
	I1101 11:18:48.440673  118797 system_pods.go:74] duration metric: took 4.964747ms to wait for pod list to return data ...
	I1101 11:18:48.440683  118797 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:18:48.445681  118797 default_sa.go:45] found service account: "default"
	I1101 11:18:48.445764  118797 default_sa.go:55] duration metric: took 5.073234ms for default service account to be created ...
	I1101 11:18:48.445778  118797 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:18:48.451443  118797 system_pods.go:86] 8 kube-system pods found
	I1101 11:18:48.451742  118797 system_pods.go:89] "coredns-66bc5c9577-w7cfg" [c0f904f6-44f6-4996-92dc-3fb6a537f96c] Running
	I1101 11:18:48.451766  118797 system_pods.go:89] "etcd-embed-certs-571864" [770ba541-6fe5-4e10-84d7-ecf8f6d626f8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:18:48.451774  118797 system_pods.go:89] "kube-apiserver-embed-certs-571864" [c9e8f5fd-436e-48aa-b2b2-f9a9564f2279] Running
	I1101 11:18:48.451781  118797 system_pods.go:89] "kube-controller-manager-embed-certs-571864" [2356aebd-c6e3-40e5-a125-b436db7c3a48] Running
	I1101 11:18:48.451787  118797 system_pods.go:89] "kube-proxy-6ddph" [50935e47-809d-4324-8200-148a11692fa8] Running
	I1101 11:18:48.451798  118797 system_pods.go:89] "kube-scheduler-embed-certs-571864" [11e5224c-7c54-489f-8396-283ed5892ff9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:18:48.451806  118797 system_pods.go:89] "metrics-server-746fcd58dc-8xq94" [319dd232-8ff5-4e8c-bb5a-c165604476c9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:18:48.451811  118797 system_pods.go:89] "storage-provisioner" [c5bbb77a-fba5-4683-be08-22021d7600b8] Running
	I1101 11:18:48.451823  118797 system_pods.go:126] duration metric: took 6.036564ms to wait for k8s-apps to be running ...
	I1101 11:18:48.451832  118797 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:18:48.451887  118797 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:18:48.531966  118797 system_svc.go:56] duration metric: took 80.113291ms WaitForService to wait for kubelet
	I1101 11:18:48.532000  118797 kubeadm.go:587] duration metric: took 9.02673999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:18:48.532023  118797 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:18:48.540985  118797 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:18:48.541046  118797 node_conditions.go:123] node cpu capacity is 2
	I1101 11:18:48.541060  118797 node_conditions.go:105] duration metric: took 9.030982ms to run NodePressure ...
	I1101 11:18:48.541076  118797 start.go:242] waiting for startup goroutines ...
	I1101 11:18:48.541086  118797 start.go:247] waiting for cluster config update ...
	I1101 11:18:48.541166  118797 start.go:256] writing updated cluster config ...
	I1101 11:18:48.541638  118797 ssh_runner.go:195] Run: rm -f paused
	I1101 11:18:48.560819  118797 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:18:48.573978  118797 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w7cfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:48.590064  118797 pod_ready.go:94] pod "coredns-66bc5c9577-w7cfg" is "Ready"
	I1101 11:18:48.590101  118797 pod_ready.go:86] duration metric: took 16.092377ms for pod "coredns-66bc5c9577-w7cfg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:48.597991  118797 pod_ready.go:83] waiting for pod "etcd-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.611561  118797 pod_ready.go:94] pod "etcd-embed-certs-571864" is "Ready"
	I1101 11:18:49.611605  118797 pod_ready.go:86] duration metric: took 1.013583664s for pod "etcd-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.619039  118797 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.642201  118797 pod_ready.go:94] pod "kube-apiserver-embed-certs-571864" is "Ready"
	I1101 11:18:49.642241  118797 pod_ready.go:86] duration metric: took 23.165447ms for pod "kube-apiserver-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.646543  118797 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.767271  118797 pod_ready.go:94] pod "kube-controller-manager-embed-certs-571864" is "Ready"
	I1101 11:18:49.767304  118797 pod_ready.go:86] duration metric: took 120.732816ms for pod "kube-controller-manager-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:49.968010  118797 pod_ready.go:83] waiting for pod "kube-proxy-6ddph" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:45.658762  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:45.659437  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:18:45.659479  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:18:45.659778  119092 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1101 11:18:45.666180  119092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:18:45.685523  119092 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-287419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-287419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.189 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:18:45.685726  119092 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:18:45.685806  119092 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:18:45.739874  119092 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 11:18:45.739973  119092 ssh_runner.go:195] Run: which lz4
	I1101 11:18:45.745645  119092 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 11:18:45.753480  119092 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 11:18:45.753514  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 11:18:47.890147  119092 crio.go:462] duration metric: took 2.144617755s to copy over tarball
	I1101 11:18:47.890300  119092 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 11:18:50.155390  119092 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.265050771s)
	I1101 11:18:50.155433  119092 crio.go:469] duration metric: took 2.265246579s to extract the tarball
	I1101 11:18:50.155443  119092 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 11:18:50.204230  119092 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:18:50.258185  119092 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:18:50.258221  119092 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:18:50.258234  119092 kubeadm.go:935] updating node { 192.168.72.189 8444 v1.34.1 crio true true} ...
	I1101 11:18:50.258391  119092 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-287419 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.189
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-287419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
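
The kubelet [Unit]/[Service] text above is rendered from the node's parameters (Kubernetes version, hostname override, node IP) and then written out as a systemd drop-in. A minimal sketch of rendering such a drop-in with text/template, assuming illustrative field names and values rather than minikube's actual template:

```go
// Sketch: render a kubelet systemd drop-in from node parameters with
// text/template. The template text and struct fields are illustrative,
// not the template minikube actually uses.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	params := struct {
		KubernetesVersion string
		NodeName          string
		NodeIP            string
	}{"v1.34.1", "default-k8s-diff-port-287419", "192.168.72.189"}

	t := template.Must(template.New("kubelet").Parse(dropIn))
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```
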
	I1101 11:18:50.258493  119092 ssh_runner.go:195] Run: crio config
	I1101 11:18:48.248827  119309 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.83.241:22: connect: no route to host
	I1101 11:18:50.367021  118797 pod_ready.go:94] pod "kube-proxy-6ddph" is "Ready"
	I1101 11:18:50.367054  118797 pod_ready.go:86] duration metric: took 399.012072ms for pod "kube-proxy-6ddph" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:50.567216  118797 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:52.221415  118797 pod_ready.go:94] pod "kube-scheduler-embed-certs-571864" is "Ready"
	I1101 11:18:52.221448  118797 pod_ready.go:86] duration metric: took 1.654197202s for pod "kube-scheduler-embed-certs-571864" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:18:52.221463  118797 pod_ready.go:40] duration metric: took 3.660600674s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:18:52.276290  118797 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 11:18:52.296690  118797 out.go:179] * Done! kubectl is now configured to use "embed-certs-571864" cluster and "default" namespace by default
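
The pod_ready.go lines above poll each kube-system pod until its Ready condition is True or the pod is gone. A minimal client-go sketch of that kind of check, assuming an illustrative kubeconfig path, pod name, and timeout (not minikube's actual code):

```go
// Sketch: wait for a pod's Ready condition with client-go, treating a
// deleted pod as done. Kubeconfig path, pod name, and timeout are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-571864", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			fmt.Println("pod is gone")
			return
		}
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}
```
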
	I1101 11:18:50.323593  119092 cni.go:84] Creating CNI manager for ""
	I1101 11:18:50.323631  119092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:18:50.323663  119092 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 11:18:50.323698  119092 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.189 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-287419 NodeName:default-k8s-diff-port-287419 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.189"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.189 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:18:50.323866  119092 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.189
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-287419"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.189"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.189"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:18:50.323933  119092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:18:50.338036  119092 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:18:50.338130  119092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:18:50.351395  119092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1101 11:18:50.378165  119092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:18:50.404831  119092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1101 11:18:50.432730  119092 ssh_runner.go:195] Run: grep 192.168.72.189	control-plane.minikube.internal$ /etc/hosts
	I1101 11:18:50.438076  119092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.189	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
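
The bash one-liner above filters any stale control-plane.minikube.internal entry out of /etc/hosts and appends the current IP mapping. A minimal Go sketch of the same idea, using this run's hostname/IP as placeholders and writing the file directly (the log instead stages a temp file and sudo-copies it):

```go
// Sketch: replace a hosts entry the way the one-liner above does
// (drop the old tab-separated mapping, append the new one).
// The hostname/IP pair is a placeholder from this run; error handling is minimal.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.72.189"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing line that maps something to the control-plane host.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
```
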
	I1101 11:18:50.458245  119092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:18:50.653121  119092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:18:50.682404  119092 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419 for IP: 192.168.72.189
	I1101 11:18:50.682436  119092 certs.go:195] generating shared ca certs ...
	I1101 11:18:50.682464  119092 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:50.682663  119092 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 11:18:50.682720  119092 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 11:18:50.682733  119092 certs.go:257] generating profile certs ...
	I1101 11:18:50.682880  119092 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/client.key
	I1101 11:18:50.682981  119092 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/apiserver.key.f27f6a30
	I1101 11:18:50.683040  119092 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/proxy-client.key
	I1101 11:18:50.683213  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem (1338 bytes)
	W1101 11:18:50.683253  119092 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998_empty.pem, impossibly tiny 0 bytes
	I1101 11:18:50.683263  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 11:18:50.683293  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:18:50.683319  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:18:50.683346  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 11:18:50.683397  119092 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:18:50.684304  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:18:50.770464  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:18:50.826170  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:18:50.865353  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:18:50.903837  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 11:18:50.939547  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:18:50.979273  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:18:51.014443  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/default-k8s-diff-port-287419/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 11:18:51.052333  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /usr/share/ca-certificates/739982.pem (1708 bytes)
	I1101 11:18:51.098653  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:18:51.142604  119092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem --> /usr/share/ca-certificates/73998.pem (1338 bytes)
	I1101 11:18:51.180582  119092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:18:51.208143  119092 ssh_runner.go:195] Run: openssl version
	I1101 11:18:51.215212  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73998.pem && ln -fs /usr/share/ca-certificates/73998.pem /etc/ssl/certs/73998.pem"
	I1101 11:18:51.231219  119092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73998.pem
	I1101 11:18:51.237181  119092 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:03 /usr/share/ca-certificates/73998.pem
	I1101 11:18:51.237262  119092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73998.pem
	I1101 11:18:51.245877  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73998.pem /etc/ssl/certs/51391683.0"
	I1101 11:18:51.261634  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739982.pem && ln -fs /usr/share/ca-certificates/739982.pem /etc/ssl/certs/739982.pem"
	I1101 11:18:51.276783  119092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739982.pem
	I1101 11:18:51.284238  119092 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:03 /usr/share/ca-certificates/739982.pem
	I1101 11:18:51.284312  119092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739982.pem
	I1101 11:18:51.295293  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/739982.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:18:51.311150  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:18:51.328773  119092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:51.336579  119092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:51.336652  119092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:18:51.344957  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:18:51.359860  119092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:18:51.366325  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:18:51.375090  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:18:51.384626  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:18:51.392868  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:18:51.402460  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:18:51.411353  119092 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
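
The repeated -checkend 86400 calls above ask openssl whether each control-plane certificate remains valid for at least another 86400 seconds (24 hours). A minimal Go sketch of the same check with crypto/x509, assuming an illustrative PEM certificate path:

```go
// Sketch: the equivalent of `openssl x509 -checkend 86400` in Go.
// The certificate path is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid for at least 24h")
	}
}
```
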
	I1101 11:18:51.420451  119092 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-287419 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-287419 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.189 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:18:51.420580  119092 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:18:51.420645  119092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:18:51.498981  119092 cri.go:89] found id: ""
	I1101 11:18:51.499063  119092 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:18:51.521648  119092 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:18:51.521678  119092 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:18:51.521743  119092 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:18:51.545925  119092 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:18:51.547118  119092 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-287419" does not appear in /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:18:51.547926  119092 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-70113/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-287419" cluster setting kubeconfig missing "default-k8s-diff-port-287419" context setting]
	I1101 11:18:51.549046  119092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:18:51.551181  119092 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:18:51.569605  119092 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.72.189
	I1101 11:18:51.569656  119092 kubeadm.go:1161] stopping kube-system containers ...
	I1101 11:18:51.569674  119092 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 11:18:51.569742  119092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:18:51.615711  119092 cri.go:89] found id: ""
	I1101 11:18:51.615786  119092 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 11:18:51.637779  119092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:18:51.651841  119092 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:18:51.651866  119092 kubeadm.go:158] found existing configuration files:
	
	I1101 11:18:51.651925  119092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1101 11:18:51.664205  119092 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:18:51.664267  119092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:18:51.677582  119092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1101 11:18:51.692735  119092 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:18:51.692837  119092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:18:51.707180  119092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1101 11:18:51.719498  119092 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:18:51.719581  119092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:18:51.732153  119092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1101 11:18:51.744913  119092 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:18:51.744989  119092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:18:51.759367  119092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:18:51.774247  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:51.853357  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:53.362889  119092 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.509481634s)
	I1101 11:18:53.362994  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:53.731950  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:53.866848  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:18:54.001010  119092 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:18:54.001129  119092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:54.501589  119092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:55.001870  119092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:54.329849  119309 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.83.241:22: connect: no route to host
	I1101 11:18:55.501249  119092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:18:55.566894  119092 api_server.go:72] duration metric: took 1.565913808s to wait for apiserver process to appear ...
	I1101 11:18:55.566931  119092 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:18:55.566973  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:18:55.568053  119092 api_server.go:269] stopped: https://192.168.72.189:8444/healthz: Get "https://192.168.72.189:8444/healthz": dial tcp 192.168.72.189:8444: connect: connection refused
	I1101 11:18:56.067770  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:18:59.493243  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:18:59.493281  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:18:59.493300  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:18:59.546941  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:18:59.546974  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:18:59.567134  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:18:59.582757  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:18:59.582800  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:00.067142  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:19:00.077874  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:00.077907  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:18:59.386401  119309 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.83.241:22: connect: connection refused
	I1101 11:19:00.570635  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:19:00.599718  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:00.599754  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:01.067678  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:19:01.083030  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:01.083068  119092 api_server.go:103] status: https://192.168.72.189:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:01.567897  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:19:01.580905  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 200:
	ok
	I1101 11:19:01.600341  119092 api_server.go:141] control plane version: v1.34.1
	I1101 11:19:01.600375  119092 api_server.go:131] duration metric: took 6.033436041s to wait for apiserver health ...
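
The healthz probes above progress from connection refused, to 403 (anonymous user), to 500 while the apiserver's post-start hooks finish, and finally to 200. A minimal sketch of that kind of retry loop, assuming an anonymous HTTPS probe with TLS verification skipped and illustrative timeouts (not minikube's actual implementation):

```go
// Sketch: poll an apiserver /healthz endpoint until it returns 200.
// The URL, interval, and deadline are illustrative; TLS verification is
// skipped only because this is an anonymous bootstrap-time probe.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.189:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
```
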
	I1101 11:19:01.600388  119092 cni.go:84] Creating CNI manager for ""
	I1101 11:19:01.600396  119092 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:19:01.602421  119092 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:19:01.603598  119092 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:19:01.627377  119092 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 11:19:01.669719  119092 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:19:01.694977  119092 system_pods.go:59] 8 kube-system pods found
	I1101 11:19:01.695033  119092 system_pods.go:61] "coredns-66bc5c9577-drlhc" [2fe001ab-c59d-4a12-9897-d7d2869a1af8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:19:01.695054  119092 system_pods.go:61] "etcd-default-k8s-diff-port-287419" [67bd5955-ba6e-4d48-a952-857e719ddcb6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:19:01.695067  119092 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-287419" [c8154e49-5eed-4825-b594-e588075878ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:19:01.695078  119092 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-287419" [02a42753-0962-4d25-b898-43759f929c36] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:19:01.695091  119092 system_pods.go:61] "kube-proxy-lhjdx" [63b7c2eb-cdb2-4318-bef4-e95e3e478fb6] Running
	I1101 11:19:01.695100  119092 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-287419" [49ee2304-24ca-4a26-8b1c-9f59d8281dea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:19:01.695112  119092 system_pods.go:61] "metrics-server-746fcd58dc-zmbnr" [ffa3dd51-bf02-44da-800d-f8d714bc1b36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:19:01.695120  119092 system_pods.go:61] "storage-provisioner" [4a047ac3-d0c4-448e-8066-5a3ccd78fcc1] Running
	I1101 11:19:01.695129  119092 system_pods.go:74] duration metric: took 25.383583ms to wait for pod list to return data ...
	I1101 11:19:01.695141  119092 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:19:01.700190  119092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:19:01.700224  119092 node_conditions.go:123] node cpu capacity is 2
	I1101 11:19:01.700238  119092 node_conditions.go:105] duration metric: took 5.091601ms to run NodePressure ...
	I1101 11:19:01.700308  119092 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:02.239500  119092 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 11:19:02.244469  119092 kubeadm.go:744] kubelet initialised
	I1101 11:19:02.244497  119092 kubeadm.go:745] duration metric: took 4.968663ms waiting for restarted kubelet to initialise ...
	I1101 11:19:02.244518  119092 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:19:02.279266  119092 ops.go:34] apiserver oom_adj: -16
	I1101 11:19:02.279294  119092 kubeadm.go:602] duration metric: took 10.757607601s to restartPrimaryControlPlane
	I1101 11:19:02.279306  119092 kubeadm.go:403] duration metric: took 10.85886702s to StartCluster
	I1101 11:19:02.279324  119092 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:02.279410  119092 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:19:02.281069  119092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:02.281465  119092 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.72.189 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:19:02.281566  119092 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:19:02.281667  119092 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-287419"
	I1101 11:19:02.281687  119092 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-287419"
	W1101 11:19:02.281695  119092 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:19:02.281724  119092 host.go:66] Checking if "default-k8s-diff-port-287419" exists ...
	I1101 11:19:02.281766  119092 config.go:182] Loaded profile config "default-k8s-diff-port-287419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:19:02.281828  119092 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-287419"
	I1101 11:19:02.281853  119092 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-287419"
	I1101 11:19:02.282446  119092 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-287419"
	I1101 11:19:02.282468  119092 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-287419"
	W1101 11:19:02.282476  119092 addons.go:248] addon metrics-server should already be in state true
	I1101 11:19:02.282485  119092 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-287419"
	I1101 11:19:02.282503  119092 host.go:66] Checking if "default-k8s-diff-port-287419" exists ...
	I1101 11:19:02.282508  119092 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-287419"
	W1101 11:19:02.282518  119092 addons.go:248] addon dashboard should already be in state true
	I1101 11:19:02.282572  119092 host.go:66] Checking if "default-k8s-diff-port-287419" exists ...
	I1101 11:19:02.284444  119092 out.go:179] * Verifying Kubernetes components...
	I1101 11:19:02.285933  119092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:19:02.287770  119092 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-287419"
	W1101 11:19:02.287795  119092 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:19:02.287823  119092 host.go:66] Checking if "default-k8s-diff-port-287419" exists ...
	I1101 11:19:02.288749  119092 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 11:19:02.288756  119092 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 11:19:02.289642  119092 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:19:02.290051  119092 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:19:02.290075  119092 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:19:02.290704  119092 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 11:19:02.290726  119092 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 11:19:02.290910  119092 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:19:02.290920  119092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:19:02.292097  119092 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 11:19:02.293195  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 11:19:02.293214  119092 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 11:19:02.296074  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.296424  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.297057  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:19:02.297090  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.297189  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.297637  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:19:02.298076  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:19:02.298114  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.298222  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:19:02.298254  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.298359  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:19:02.298820  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:19:02.299984  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.300478  119092 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8d:3a:46", ip: ""} in network mk-default-k8s-diff-port-287419: {Iface:virbr4 ExpiryTime:2025-11-01 12:18:38 +0000 UTC Type:0 Mac:52:54:00:8d:3a:46 Iaid: IPaddr:192.168.72.189 Prefix:24 Hostname:default-k8s-diff-port-287419 Clientid:01:52:54:00:8d:3a:46}
	I1101 11:19:02.300519  119092 main.go:143] libmachine: domain default-k8s-diff-port-287419 has defined IP address 192.168.72.189 and MAC address 52:54:00:8d:3a:46 in network mk-default-k8s-diff-port-287419
	I1101 11:19:02.300712  119092 sshutil.go:53] new ssh client: &{IP:192.168.72.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/default-k8s-diff-port-287419/id_rsa Username:docker}
	I1101 11:19:02.661574  119092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:19:02.693236  119092 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-287419" to be "Ready" ...
	I1101 11:19:02.945197  119092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:19:02.956238  119092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:19:02.974795  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 11:19:02.974832  119092 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 11:19:02.990318  119092 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 11:19:02.990363  119092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 11:19:03.107005  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 11:19:03.107035  119092 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 11:19:03.108601  119092 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 11:19:03.108634  119092 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 11:19:03.248974  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 11:19:03.249206  119092 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 11:19:03.250386  119092 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:19:03.250403  119092 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 11:19:03.380890  119092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:19:03.380891  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 11:19:03.381051  119092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 11:19:03.493547  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 11:19:03.493574  119092 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 11:19:03.614503  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 11:19:03.614527  119092 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 11:19:03.712290  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 11:19:03.712319  119092 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 11:19:03.761853  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 11:19:03.761878  119092 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 11:19:03.844380  119092 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:19:03.844416  119092 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 11:19:03.915138  119092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1101 11:19:04.698774  119092 node_ready.go:57] node "default-k8s-diff-port-287419" has "Ready":"False" status (will retry)
	I1101 11:19:05.727577  119092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.771302612s)
	I1101 11:19:05.727646  119092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.346638304s)
	I1101 11:19:05.727664  119092 addons.go:480] Verifying addon metrics-server=true in "default-k8s-diff-port-287419"
	I1101 11:19:05.890894  119092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.975640964s)
	I1101 11:19:05.892383  119092 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-287419 addons enable metrics-server
	
	I1101 11:19:05.893977  119092 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
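
The addon sequence logged above follows one pattern: each manifest is scp'd into /etc/kubernetes/addons on the guest, then applied in a single kubectl invocation with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. Below is a minimal Go sketch of that apply step; it is not minikube's actual addons package, and the paths are copied from the log, so they only exist inside the minikube guest.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddons mirrors the pattern in the log: one `kubectl apply` over all
// manifest files, with KUBECONFIG pointing at the node's kubeconfig.
func applyAddons(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Paths copied from the log; they exist only inside the minikube guest.
	err := applyAddons(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		},
	)
	fmt.Println(err)
}
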
	I1101 11:19:02.543216  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:19:02.550967  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.552473  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:02.552512  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.553123  119309 profile.go:143] Saving config to /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/config.json ...
	I1101 11:19:02.553465  119309 machine.go:94] provisionDockerMachine start ...
	I1101 11:19:02.559146  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.560029  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:02.560229  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.560723  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:19:02.561035  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:19:02.561095  119309 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 11:19:02.694782  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 11:19:02.694814  119309 buildroot.go:166] provisioning hostname "newest-cni-268638"
	I1101 11:19:02.700708  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.701376  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:02.701434  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.701767  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:19:02.702072  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:19:02.702097  119309 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-268638 && echo "newest-cni-268638" | sudo tee /etc/hostname
	I1101 11:19:02.849185  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-268638
	
	I1101 11:19:02.855961  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.856674  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:02.856715  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:02.856972  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:19:02.857305  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:19:02.857332  119309 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-268638' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-268638/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-268638' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 11:19:03.000592  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 11:19:03.000631  119309 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21830-70113/.minikube CaCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21830-70113/.minikube}
	I1101 11:19:03.000666  119309 buildroot.go:174] setting up certificates
	I1101 11:19:03.000687  119309 provision.go:84] configureAuth start
	I1101 11:19:03.005966  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.137583  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:03.137648  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.142322  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.142942  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:03.142985  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.143150  119309 provision.go:143] copyHostCerts
	I1101 11:19:03.143226  119309 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem, removing ...
	I1101 11:19:03.143244  119309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem
	I1101 11:19:03.143337  119309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/ca.pem (1082 bytes)
	I1101 11:19:03.143476  119309 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem, removing ...
	I1101 11:19:03.143489  119309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem
	I1101 11:19:03.143548  119309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/cert.pem (1123 bytes)
	I1101 11:19:03.143664  119309 exec_runner.go:144] found /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem, removing ...
	I1101 11:19:03.143678  119309 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem
	I1101 11:19:03.143720  119309 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21830-70113/.minikube/key.pem (1675 bytes)
	I1101 11:19:03.143824  119309 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem org=jenkins.newest-cni-268638 san=[127.0.0.1 192.168.83.241 localhost minikube newest-cni-268638]
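
The provision.go line above generates a server certificate whose SANs cover 127.0.0.1, the guest IP 192.168.83.241, and the names localhost, minikube, and newest-cni-268638. A minimal sketch of producing such a SAN certificate with Go's crypto/x509 follows; for brevity it self-signs, whereas the real provisioning signs with the profile CA (ca.pem/ca-key.pem) listed in the log.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirror the provision.go line above; the real minikube cert is
	// CA-signed, this sketch self-signs for brevity.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-268638"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-268638"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.241")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	// Emit the PEM-encoded certificate, roughly what ends up in server.pem.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
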
	I1101 11:19:03.483327  119309 provision.go:177] copyRemoteCerts
	I1101 11:19:03.483390  119309 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 11:19:03.487133  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.487719  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:03.487748  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.487932  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:03.584204  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 11:19:03.628717  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 11:19:03.677398  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 11:19:03.727214  119309 provision.go:87] duration metric: took 726.505982ms to configureAuth
	I1101 11:19:03.727250  119309 buildroot.go:189] setting minikube options for container-runtime
	I1101 11:19:03.727520  119309 config.go:182] Loaded profile config "newest-cni-268638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:19:03.731945  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.732435  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:03.732494  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:03.732930  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:19:03.733251  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:19:03.733290  119309 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 11:19:04.069760  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 11:19:04.069790  119309 machine.go:97] duration metric: took 1.516311361s to provisionDockerMachine
	I1101 11:19:04.069821  119309 start.go:293] postStartSetup for "newest-cni-268638" (driver="kvm2")
	I1101 11:19:04.069837  119309 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 11:19:04.069910  119309 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 11:19:04.073709  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.074194  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:04.074226  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.074554  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:04.182259  119309 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 11:19:04.190057  119309 info.go:137] Remote host: Buildroot 2025.02
	I1101 11:19:04.190106  119309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/addons for local assets ...
	I1101 11:19:04.190205  119309 filesync.go:126] Scanning /home/jenkins/minikube-integration/21830-70113/.minikube/files for local assets ...
	I1101 11:19:04.190342  119309 filesync.go:149] local asset: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem -> 739982.pem in /etc/ssl/certs
	I1101 11:19:04.190485  119309 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 11:19:04.210151  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:19:04.256616  119309 start.go:296] duration metric: took 186.775542ms for postStartSetup
	I1101 11:19:04.256750  119309 fix.go:56] duration metric: took 20.838506313s for fixHost
	I1101 11:19:04.260280  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.260754  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:04.260788  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.260992  119309 main.go:143] libmachine: Using SSH client type: native
	I1101 11:19:04.261266  119309 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.241 22 <nil> <nil>}
	I1101 11:19:04.261283  119309 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 11:19:04.389074  119309 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761995944.352514696
	
	I1101 11:19:04.389102  119309 fix.go:216] guest clock: 1761995944.352514696
	I1101 11:19:04.389112  119309 fix.go:229] Guest: 2025-11-01 11:19:04.352514696 +0000 UTC Remote: 2025-11-01 11:19:04.256761907 +0000 UTC m=+37.752831701 (delta=95.752789ms)
	I1101 11:19:04.389135  119309 fix.go:200] guest clock delta is within tolerance: 95.752789ms
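
The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host clock, and accept the machine when the delta stays under a tolerance. A small Go sketch of that arithmetic, assuming a hypothetical 2s threshold (the log does not state minikube's actual tolerance):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output such as "1761995944.352514696"
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1761995944.352514696") // sample value from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical threshold, not minikube's constant
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
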
	I1101 11:19:04.389143  119309 start.go:83] releasing machines lock for "newest-cni-268638", held for 20.97092735s
	I1101 11:19:04.394244  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.394978  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:04.395042  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.396018  119309 ssh_runner.go:195] Run: cat /version.json
	I1101 11:19:04.396716  119309 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 11:19:04.404825  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.405620  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.406188  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:04.406227  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.406424  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:04.406458  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:04.406849  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:04.406949  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:04.501378  119309 ssh_runner.go:195] Run: systemctl --version
	I1101 11:19:04.534937  119309 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 11:19:04.753637  119309 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 11:19:04.764918  119309 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 11:19:04.765041  119309 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 11:19:04.790985  119309 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 11:19:04.791020  119309 start.go:496] detecting cgroup driver to use...
	I1101 11:19:04.791109  119309 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 11:19:04.816639  119309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 11:19:04.837643  119309 docker.go:218] disabling cri-docker service (if available) ...
	I1101 11:19:04.837726  119309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 11:19:04.859128  119309 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 11:19:04.880452  119309 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 11:19:05.073252  119309 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 11:19:05.345388  119309 docker.go:234] disabling docker service ...
	I1101 11:19:05.345473  119309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 11:19:05.371588  119309 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 11:19:05.391751  119309 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 11:19:05.596336  119309 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 11:19:05.800163  119309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 11:19:05.826270  119309 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 11:19:05.863203  119309 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 11:19:05.863270  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.878405  119309 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 11:19:05.878475  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.894648  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.909477  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.925137  119309 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 11:19:05.946319  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.964326  119309 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 11:19:05.993100  119309 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
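
The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and adjust conmon_cgroup and default_sysctls. A minimal Go sketch of the same in-place rewrite for the first two keys; it assumes the drop-in file exists, so it only does something useful inside a minikube guest.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf pins the pause image and cgroup manager in cri-o's drop-in
// config, mimicking the first two sed invocations in the log.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "cgroupfs")
	fmt.Println(err)
}
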
	I1101 11:19:06.009398  119309 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 11:19:06.023888  119309 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 11:19:06.023972  119309 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 11:19:06.049290  119309 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 11:19:06.064426  119309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:19:06.214825  119309 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 11:19:06.349129  119309 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 11:19:06.349210  119309 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 11:19:06.355161  119309 start.go:564] Will wait 60s for crictl version
	I1101 11:19:06.355232  119309 ssh_runner.go:195] Run: which crictl
	I1101 11:19:06.359672  119309 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 11:19:06.407252  119309 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 11:19:06.407348  119309 ssh_runner.go:195] Run: crio --version
	I1101 11:19:06.443806  119309 ssh_runner.go:195] Run: crio --version
	I1101 11:19:06.483519  119309 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 11:19:06.487909  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:06.488524  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:06.488588  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:06.488858  119309 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1101 11:19:06.494322  119309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:19:06.515748  119309 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 11:19:06.517187  119309 kubeadm.go:884] updating cluster {Name:newest-cni-268638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.241 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 11:19:06.517334  119309 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 11:19:06.517404  119309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:19:06.569300  119309 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 11:19:06.569384  119309 ssh_runner.go:195] Run: which lz4
	I1101 11:19:06.575446  119309 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 11:19:05.895169  119092 addons.go:515] duration metric: took 3.613636476s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	W1101 11:19:07.199115  119092 node_ready.go:57] node "default-k8s-diff-port-287419" has "Ready":"False" status (will retry)
	W1101 11:19:09.697424  119092 node_ready.go:57] node "default-k8s-diff-port-287419" has "Ready":"False" status (will retry)
	I1101 11:19:10.203943  119092 node_ready.go:49] node "default-k8s-diff-port-287419" is "Ready"
	I1101 11:19:10.203984  119092 node_ready.go:38] duration metric: took 7.510699569s for node "default-k8s-diff-port-287419" to be "Ready" ...
	I1101 11:19:10.203999  119092 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:19:10.204057  119092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:10.243422  119092 api_server.go:72] duration metric: took 7.961877658s to wait for apiserver process to appear ...
	I1101 11:19:10.243453  119092 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:19:10.243478  119092 api_server.go:253] Checking apiserver healthz at https://192.168.72.189:8444/healthz ...
	I1101 11:19:10.255943  119092 api_server.go:279] https://192.168.72.189:8444/healthz returned 200:
	ok
	I1101 11:19:10.257571  119092 api_server.go:141] control plane version: v1.34.1
	I1101 11:19:10.257607  119092 api_server.go:131] duration metric: took 14.143902ms to wait for apiserver health ...
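
The api_server.go lines above poll https://192.168.72.189:8444/healthz and treat an HTTP 200 with body "ok" as healthy. A minimal probe sketch, not minikube's implementation; TLS verification is skipped here only because the sketch has no access to the cluster CA, whereas minikube trusts the cluster certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy performs the probe the log records: GET /healthz and
// require HTTP 200 with body "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// No cluster CA at hand in this sketch, so skip verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := apiserverHealthy("https://192.168.72.189:8444/healthz") // endpoint from the log
	fmt.Println(healthy, err)
}
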
	I1101 11:19:10.257620  119092 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:19:10.262958  119092 system_pods.go:59] 8 kube-system pods found
	I1101 11:19:10.262997  119092 system_pods.go:61] "coredns-66bc5c9577-drlhc" [2fe001ab-c59d-4a12-9897-d7d2869a1af8] Running
	I1101 11:19:10.263005  119092 system_pods.go:61] "etcd-default-k8s-diff-port-287419" [67bd5955-ba6e-4d48-a952-857e719ddcb6] Running
	I1101 11:19:10.263016  119092 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-287419" [c8154e49-5eed-4825-b594-e588075878ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:19:10.263023  119092 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-287419" [02a42753-0962-4d25-b898-43759f929c36] Running
	I1101 11:19:10.263041  119092 system_pods.go:61] "kube-proxy-lhjdx" [63b7c2eb-cdb2-4318-bef4-e95e3e478fb6] Running
	I1101 11:19:10.263049  119092 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-287419" [49ee2304-24ca-4a26-8b1c-9f59d8281dea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:19:10.263057  119092 system_pods.go:61] "metrics-server-746fcd58dc-zmbnr" [ffa3dd51-bf02-44da-800d-f8d714bc1b36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:19:10.263083  119092 system_pods.go:61] "storage-provisioner" [4a047ac3-d0c4-448e-8066-5a3ccd78fcc1] Running
	I1101 11:19:10.263091  119092 system_pods.go:74] duration metric: took 5.462284ms to wait for pod list to return data ...
	I1101 11:19:10.263101  119092 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:19:10.267391  119092 default_sa.go:45] found service account: "default"
	I1101 11:19:10.267574  119092 default_sa.go:55] duration metric: took 4.460174ms for default service account to be created ...
	I1101 11:19:10.267600  119092 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 11:19:06.581287  119309 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 11:19:06.581331  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 11:19:08.409367  119309 crio.go:462] duration metric: took 1.83395154s to copy over tarball
	I1101 11:19:08.409456  119309 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 11:19:10.402378  119309 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.992885247s)
	I1101 11:19:10.402419  119309 crio.go:469] duration metric: took 1.993018787s to extract the tarball
	I1101 11:19:10.402431  119309 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 11:19:10.449439  119309 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 11:19:10.505411  119309 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 11:19:10.505442  119309 cache_images.go:86] Images are preloaded, skipping loading
	I1101 11:19:10.505455  119309 kubeadm.go:935] updating node { 192.168.83.241 8443 v1.34.1 crio true true} ...
	I1101 11:19:10.505632  119309 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-268638 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 11:19:10.505740  119309 ssh_runner.go:195] Run: crio config
	I1101 11:19:10.565423  119309 cni.go:84] Creating CNI manager for ""
	I1101 11:19:10.565452  119309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:19:10.565474  119309 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 11:19:10.565511  119309 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.83.241 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-268638 NodeName:newest-cni-268638 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.241"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 11:19:10.565743  119309 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-268638"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.241"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.241"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 11:19:10.565841  119309 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 11:19:10.579061  119309 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 11:19:10.579148  119309 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 11:19:10.598798  119309 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1101 11:19:10.629409  119309 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 11:19:10.654108  119309 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1101 11:19:10.678258  119309 ssh_runner.go:195] Run: grep 192.168.83.241	control-plane.minikube.internal$ /etc/hosts
	I1101 11:19:10.685115  119309 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.241	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 11:19:10.708632  119309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:19:10.869819  119309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:19:10.895714  119309 certs.go:69] Setting up /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638 for IP: 192.168.83.241
	I1101 11:19:10.895744  119309 certs.go:195] generating shared ca certs ...
	I1101 11:19:10.895769  119309 certs.go:227] acquiring lock for ca certs: {Name:mk20731b316fbc22c351241cefc40924880eeba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:10.895939  119309 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key
	I1101 11:19:10.896003  119309 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key
	I1101 11:19:10.896020  119309 certs.go:257] generating profile certs ...
	I1101 11:19:10.896175  119309 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/client.key
	I1101 11:19:10.896257  119309 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/apiserver.key.2629d584
	I1101 11:19:10.896306  119309 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/proxy-client.key
	I1101 11:19:10.896465  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem (1338 bytes)
	W1101 11:19:10.896510  119309 certs.go:480] ignoring /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998_empty.pem, impossibly tiny 0 bytes
	I1101 11:19:10.896522  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 11:19:10.896572  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/ca.pem (1082 bytes)
	I1101 11:19:10.896604  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/cert.pem (1123 bytes)
	I1101 11:19:10.896641  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/certs/key.pem (1675 bytes)
	I1101 11:19:10.896708  119309 certs.go:484] found cert: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem (1708 bytes)
	I1101 11:19:10.897339  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 11:19:10.956463  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 11:19:11.002135  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 11:19:11.038484  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 11:19:11.076307  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 11:19:11.109072  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 11:19:11.141404  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 11:19:11.177071  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/newest-cni-268638/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 11:19:11.209770  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/certs/73998.pem --> /usr/share/ca-certificates/73998.pem (1338 bytes)
	I1101 11:19:11.243590  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/ssl/certs/739982.pem --> /usr/share/ca-certificates/739982.pem (1708 bytes)
	I1101 11:19:11.281067  119309 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21830-70113/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 11:19:11.317422  119309 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 11:19:11.345658  119309 ssh_runner.go:195] Run: openssl version
	I1101 11:19:11.354713  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73998.pem && ln -fs /usr/share/ca-certificates/73998.pem /etc/ssl/certs/73998.pem"
	I1101 11:19:11.370673  119309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73998.pem
	I1101 11:19:11.377645  119309 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 10:03 /usr/share/ca-certificates/73998.pem
	I1101 11:19:11.377727  119309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73998.pem
	I1101 11:19:11.387892  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/73998.pem /etc/ssl/certs/51391683.0"
	I1101 11:19:11.406129  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739982.pem && ln -fs /usr/share/ca-certificates/739982.pem /etc/ssl/certs/739982.pem"
	I1101 11:19:11.422341  119309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739982.pem
	I1101 11:19:11.428443  119309 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 10:03 /usr/share/ca-certificates/739982.pem
	I1101 11:19:11.428508  119309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739982.pem
	I1101 11:19:11.436762  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/739982.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 11:19:11.452044  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 11:19:11.467992  119309 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:19:11.474024  119309 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:50 /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:19:11.474101  119309 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 11:19:11.483025  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 11:19:11.499055  119309 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 11:19:11.505134  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 11:19:11.514907  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 11:19:11.523957  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 11:19:11.532373  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 11:19:11.541455  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 11:19:11.550756  119309 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 11:19:11.560282  119309 kubeadm.go:401] StartCluster: {Name:newest-cni-268638 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268638 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.241 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 11:19:11.560393  119309 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 11:19:11.560464  119309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:19:10.273597  119092 system_pods.go:86] 8 kube-system pods found
	I1101 11:19:10.273630  119092 system_pods.go:89] "coredns-66bc5c9577-drlhc" [2fe001ab-c59d-4a12-9897-d7d2869a1af8] Running
	I1101 11:19:10.273638  119092 system_pods.go:89] "etcd-default-k8s-diff-port-287419" [67bd5955-ba6e-4d48-a952-857e719ddcb6] Running
	I1101 11:19:10.273650  119092 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-287419" [c8154e49-5eed-4825-b594-e588075878ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:19:10.273662  119092 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-287419" [02a42753-0962-4d25-b898-43759f929c36] Running
	I1101 11:19:10.273671  119092 system_pods.go:89] "kube-proxy-lhjdx" [63b7c2eb-cdb2-4318-bef4-e95e3e478fb6] Running
	I1101 11:19:10.273679  119092 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-287419" [49ee2304-24ca-4a26-8b1c-9f59d8281dea] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:19:10.273737  119092 system_pods.go:89] "metrics-server-746fcd58dc-zmbnr" [ffa3dd51-bf02-44da-800d-f8d714bc1b36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:19:10.273749  119092 system_pods.go:89] "storage-provisioner" [4a047ac3-d0c4-448e-8066-5a3ccd78fcc1] Running
	I1101 11:19:10.273771  119092 system_pods.go:126] duration metric: took 6.161144ms to wait for k8s-apps to be running ...
	I1101 11:19:10.273783  119092 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 11:19:10.273846  119092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 11:19:10.302853  119092 system_svc.go:56] duration metric: took 29.056278ms WaitForService to wait for kubelet
	I1101 11:19:10.302889  119092 kubeadm.go:587] duration metric: took 8.021353572s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 11:19:10.302910  119092 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:19:10.308609  119092 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:19:10.308637  119092 node_conditions.go:123] node cpu capacity is 2
	I1101 11:19:10.308653  119092 node_conditions.go:105] duration metric: took 5.737557ms to run NodePressure ...
	I1101 11:19:10.308671  119092 start.go:242] waiting for startup goroutines ...
	I1101 11:19:10.308681  119092 start.go:247] waiting for cluster config update ...
	I1101 11:19:10.308695  119092 start.go:256] writing updated cluster config ...
	I1101 11:19:10.309102  119092 ssh_runner.go:195] Run: rm -f paused
	I1101 11:19:10.317997  119092 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:19:10.324619  119092 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-drlhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:10.334515  119092 pod_ready.go:94] pod "coredns-66bc5c9577-drlhc" is "Ready"
	I1101 11:19:10.334581  119092 pod_ready.go:86] duration metric: took 9.911389ms for pod "coredns-66bc5c9577-drlhc" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:10.339962  119092 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:10.348425  119092 pod_ready.go:94] pod "etcd-default-k8s-diff-port-287419" is "Ready"
	I1101 11:19:10.348455  119092 pod_ready.go:86] duration metric: took 8.464953ms for pod "etcd-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:10.352948  119092 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 11:19:12.445160  119092 pod_ready.go:104] pod "kube-apiserver-default-k8s-diff-port-287419" is not "Ready", error: <nil>
	I1101 11:19:13.294339  119092 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-287419" is "Ready"
	I1101 11:19:13.294377  119092 pod_ready.go:86] duration metric: took 2.941390708s for pod "kube-apiserver-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.298088  119092 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.305389  119092 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-287419" is "Ready"
	I1101 11:19:13.305420  119092 pod_ready.go:86] duration metric: took 7.301513ms for pod "kube-controller-manager-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.310886  119092 pod_ready.go:83] waiting for pod "kube-proxy-lhjdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.329132  119092 pod_ready.go:94] pod "kube-proxy-lhjdx" is "Ready"
	I1101 11:19:13.329158  119092 pod_ready.go:86] duration metric: took 18.248938ms for pod "kube-proxy-lhjdx" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.529304  119092 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.925193  119092 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-287419" is "Ready"
	I1101 11:19:13.925231  119092 pod_ready.go:86] duration metric: took 395.894846ms for pod "kube-scheduler-default-k8s-diff-port-287419" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 11:19:13.925249  119092 pod_ready.go:40] duration metric: took 3.607204823s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 11:19:13.973382  119092 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 11:19:13.975205  119092 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-287419" cluster and "default" namespace by default
	I1101 11:19:11.609666  119309 cri.go:89] found id: ""
	I1101 11:19:11.609741  119309 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 11:19:11.630306  119309 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 11:19:11.630327  119309 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 11:19:11.630375  119309 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 11:19:11.651352  119309 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 11:19:11.652218  119309 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-268638" does not appear in /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:19:11.652756  119309 kubeconfig.go:62] /home/jenkins/minikube-integration/21830-70113/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-268638" cluster setting kubeconfig missing "newest-cni-268638" context setting]
	I1101 11:19:11.653466  119309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:11.710978  119309 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 11:19:11.725584  119309 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.83.241
	I1101 11:19:11.725625  119309 kubeadm.go:1161] stopping kube-system containers ...
	I1101 11:19:11.725642  119309 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 11:19:11.725705  119309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 11:19:11.772747  119309 cri.go:89] found id: ""
	I1101 11:19:11.772848  119309 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 11:19:11.795471  119309 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 11:19:11.808762  119309 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 11:19:11.808848  119309 kubeadm.go:158] found existing configuration files:
	
	I1101 11:19:11.808917  119309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 11:19:11.821235  119309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 11:19:11.821307  119309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 11:19:11.835553  119309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 11:19:11.848021  119309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 11:19:11.848115  119309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 11:19:11.863170  119309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 11:19:11.875313  119309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 11:19:11.875380  119309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 11:19:11.890693  119309 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 11:19:11.906182  119309 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 11:19:11.906256  119309 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 11:19:11.919826  119309 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 11:19:11.934053  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:12.015579  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:14.342020  119309 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.326399831s)
	I1101 11:19:14.342088  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:14.660586  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:14.742015  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:14.838918  119309 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:19:14.839004  119309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:15.339395  119309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:15.839460  119309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:16.340084  119309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:16.373447  119309 api_server.go:72] duration metric: took 1.53453739s to wait for apiserver process to appear ...
	I1101 11:19:16.373485  119309 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:19:16.373512  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:19.157737  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 11:19:19.157765  119309 api_server.go:103] status: https://192.168.83.241:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 11:19:19.157779  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:19.342659  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:19.342701  119309 api_server.go:103] status: https://192.168.83.241:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:19.374013  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:19.397242  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:19.397282  119309 api_server.go:103] status: https://192.168.83.241:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:19.873795  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:19.880439  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:19.880468  119309 api_server.go:103] status: https://192.168.83.241:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:20.373786  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:20.383394  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 11:19:20.383440  119309 api_server.go:103] status: https://192.168.83.241:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 11:19:20.874090  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:20.886513  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 200:
	ok
	I1101 11:19:20.897302  119309 api_server.go:141] control plane version: v1.34.1
	I1101 11:19:20.897342  119309 api_server.go:131] duration metric: took 4.523847623s to wait for apiserver health ...
	I1101 11:19:20.897356  119309 cni.go:84] Creating CNI manager for ""
	I1101 11:19:20.897364  119309 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 11:19:20.899671  119309 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 11:19:20.901215  119309 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 11:19:20.917189  119309 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 11:19:20.967064  119309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:19:20.973563  119309 system_pods.go:59] 8 kube-system pods found
	I1101 11:19:20.973612  119309 system_pods.go:61] "coredns-66bc5c9577-x5nfd" [acc63001-4d92-4ca1-ac5d-7a0e2c4a25a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:19:20.973625  119309 system_pods.go:61] "etcd-newest-cni-268638" [b62e4b95-ef59-4654-9898-ba8e0fff3055] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:19:20.973637  119309 system_pods.go:61] "kube-apiserver-newest-cni-268638" [0bbccadf-5e63-497e-99de-df7df8aaf3d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:19:20.973651  119309 system_pods.go:61] "kube-controller-manager-newest-cni-268638" [ee7753cb-2e9a-4a99-bad6-c6f735170567] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:19:20.973662  119309 system_pods.go:61] "kube-proxy-p5ldr" [04b69050-8b02-418a-9872-92d2559f8b82] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 11:19:20.973678  119309 system_pods.go:61] "kube-scheduler-newest-cni-268638" [b1ebd08f-c186-4b93-8642-5f9acc2eef2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:19:20.973693  119309 system_pods.go:61] "metrics-server-746fcd58dc-mv8ln" [59cbbd00-75e5-4542-97f4-c810a5533e4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:19:20.973700  119309 system_pods.go:61] "storage-provisioner" [451e647d-8388-4461-a4a9-09b930bc3f87] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 11:19:20.973710  119309 system_pods.go:74] duration metric: took 6.623313ms to wait for pod list to return data ...
	I1101 11:19:20.973722  119309 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:19:20.978710  119309 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:19:20.978734  119309 node_conditions.go:123] node cpu capacity is 2
	I1101 11:19:20.978745  119309 node_conditions.go:105] duration metric: took 5.017939ms to run NodePressure ...
	I1101 11:19:20.978794  119309 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 11:19:21.345717  119309 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 11:19:21.364040  119309 ops.go:34] apiserver oom_adj: -16
	I1101 11:19:21.364072  119309 kubeadm.go:602] duration metric: took 9.73373589s to restartPrimaryControlPlane
	I1101 11:19:21.364088  119309 kubeadm.go:403] duration metric: took 9.803815673s to StartCluster
	I1101 11:19:21.364112  119309 settings.go:142] acquiring lock: {Name:mk26e3d3b2448df59827bb1be60cde1d117dbc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:21.364206  119309 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:19:21.365717  119309 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21830-70113/kubeconfig: {Name:mk1fa0677a0758214359bfdd6f326495ee5fd60e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 11:19:21.366054  119309 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.83.241 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 11:19:21.366163  119309 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 11:19:21.366280  119309 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-268638"
	I1101 11:19:21.366314  119309 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-268638"
	W1101 11:19:21.366328  119309 addons.go:248] addon storage-provisioner should already be in state true
	I1101 11:19:21.366362  119309 host.go:66] Checking if "newest-cni-268638" exists ...
	I1101 11:19:21.366362  119309 addons.go:70] Setting default-storageclass=true in profile "newest-cni-268638"
	I1101 11:19:21.366386  119309 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-268638"
	I1101 11:19:21.366403  119309 addons.go:70] Setting metrics-server=true in profile "newest-cni-268638"
	I1101 11:19:21.366447  119309 addons.go:239] Setting addon metrics-server=true in "newest-cni-268638"
	W1101 11:19:21.366460  119309 addons.go:248] addon metrics-server should already be in state true
	I1101 11:19:21.366493  119309 host.go:66] Checking if "newest-cni-268638" exists ...
	I1101 11:19:21.366605  119309 addons.go:70] Setting dashboard=true in profile "newest-cni-268638"
	I1101 11:19:21.366647  119309 addons.go:239] Setting addon dashboard=true in "newest-cni-268638"
	W1101 11:19:21.366657  119309 addons.go:248] addon dashboard should already be in state true
	I1101 11:19:21.366689  119309 host.go:66] Checking if "newest-cni-268638" exists ...
	I1101 11:19:21.366414  119309 config.go:182] Loaded profile config "newest-cni-268638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:19:21.368296  119309 out.go:179] * Verifying Kubernetes components...
	I1101 11:19:21.369941  119309 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 11:19:21.370975  119309 addons.go:239] Setting addon default-storageclass=true in "newest-cni-268638"
	W1101 11:19:21.371000  119309 addons.go:248] addon default-storageclass should already be in state true
	I1101 11:19:21.371027  119309 host.go:66] Checking if "newest-cni-268638" exists ...
	I1101 11:19:21.371426  119309 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1101 11:19:21.371435  119309 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 11:19:21.371468  119309 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 11:19:21.372639  119309 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 11:19:21.372689  119309 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:19:21.372698  119309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 11:19:21.372674  119309 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 11:19:21.372902  119309 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 11:19:21.372918  119309 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 11:19:21.373807  119309 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 11:19:21.375329  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 11:19:21.375353  119309 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 11:19:21.377130  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.377302  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.377598  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.378149  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:21.378182  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.378338  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:21.378371  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.378455  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:21.378763  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:21.378804  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.378796  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:21.379105  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:21.380133  119309 main.go:143] libmachine: domain newest-cni-268638 has defined MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.380622  119309 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:62:b8:3b", ip: ""} in network mk-newest-cni-268638: {Iface:virbr5 ExpiryTime:2025-11-01 12:18:59 +0000 UTC Type:0 Mac:52:54:00:62:b8:3b Iaid: IPaddr:192.168.83.241 Prefix:24 Hostname:newest-cni-268638 Clientid:01:52:54:00:62:b8:3b}
	I1101 11:19:21.380658  119309 main.go:143] libmachine: domain newest-cni-268638 has defined IP address 192.168.83.241 and MAC address 52:54:00:62:b8:3b in network mk-newest-cni-268638
	I1101 11:19:21.380925  119309 sshutil.go:53] new ssh client: &{IP:192.168.83.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/newest-cni-268638/id_rsa Username:docker}
	I1101 11:19:21.659867  119309 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 11:19:21.685760  119309 api_server.go:52] waiting for apiserver process to appear ...
	I1101 11:19:21.685860  119309 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 11:19:21.729512  119309 api_server.go:72] duration metric: took 363.407154ms to wait for apiserver process to appear ...
	I1101 11:19:21.729556  119309 api_server.go:88] waiting for apiserver healthz status ...
	I1101 11:19:21.729579  119309 api_server.go:253] Checking apiserver healthz at https://192.168.83.241:8443/healthz ...
	I1101 11:19:21.748315  119309 api_server.go:279] https://192.168.83.241:8443/healthz returned 200:
	ok
	I1101 11:19:21.749440  119309 api_server.go:141] control plane version: v1.34.1
	I1101 11:19:21.749466  119309 api_server.go:131] duration metric: took 19.901219ms to wait for apiserver health ...
	I1101 11:19:21.749475  119309 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 11:19:21.757166  119309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 11:19:21.761479  119309 system_pods.go:59] 8 kube-system pods found
	I1101 11:19:21.761520  119309 system_pods.go:61] "coredns-66bc5c9577-x5nfd" [acc63001-4d92-4ca1-ac5d-7a0e2c4a25a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 11:19:21.761544  119309 system_pods.go:61] "etcd-newest-cni-268638" [b62e4b95-ef59-4654-9898-ba8e0fff3055] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 11:19:21.761559  119309 system_pods.go:61] "kube-apiserver-newest-cni-268638" [0bbccadf-5e63-497e-99de-df7df8aaf3d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 11:19:21.761570  119309 system_pods.go:61] "kube-controller-manager-newest-cni-268638" [ee7753cb-2e9a-4a99-bad6-c6f735170567] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 11:19:21.761578  119309 system_pods.go:61] "kube-proxy-p5ldr" [04b69050-8b02-418a-9872-92d2559f8b82] Running
	I1101 11:19:21.761591  119309 system_pods.go:61] "kube-scheduler-newest-cni-268638" [b1ebd08f-c186-4b93-8642-5f9acc2eef2c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 11:19:21.761606  119309 system_pods.go:61] "metrics-server-746fcd58dc-mv8ln" [59cbbd00-75e5-4542-97f4-c810a5533e4f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 11:19:21.761616  119309 system_pods.go:61] "storage-provisioner" [451e647d-8388-4461-a4a9-09b930bc3f87] Running
	I1101 11:19:21.761625  119309 system_pods.go:74] duration metric: took 12.142599ms to wait for pod list to return data ...
	I1101 11:19:21.761639  119309 default_sa.go:34] waiting for default service account to be created ...
	I1101 11:19:21.770025  119309 default_sa.go:45] found service account: "default"
	I1101 11:19:21.770061  119309 default_sa.go:55] duration metric: took 8.413855ms for default service account to be created ...
	I1101 11:19:21.770078  119309 kubeadm.go:587] duration metric: took 403.980934ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 11:19:21.770099  119309 node_conditions.go:102] verifying NodePressure condition ...
	I1101 11:19:21.775874  119309 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 11:19:21.775904  119309 node_conditions.go:123] node cpu capacity is 2
	I1101 11:19:21.775922  119309 node_conditions.go:105] duration metric: took 5.815749ms to run NodePressure ...
	I1101 11:19:21.775938  119309 start.go:242] waiting for startup goroutines ...
	I1101 11:19:21.842846  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 11:19:21.842874  119309 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 11:19:21.844254  119309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 11:19:21.844279  119309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1101 11:19:21.849369  119309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 11:19:21.929460  119309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 11:19:21.929491  119309 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 11:19:21.950803  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 11:19:21.950840  119309 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 11:19:22.007473  119309 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:19:22.007504  119309 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 11:19:22.080939  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 11:19:22.080965  119309 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 11:19:22.099491  119309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 11:19:22.201305  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 11:19:22.201329  119309 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 11:19:22.292063  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 11:19:22.292100  119309 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 11:19:22.323098  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 11:19:22.323129  119309 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 11:19:22.355259  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 11:19:22.355296  119309 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 11:19:22.439275  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 11:19:22.439303  119309 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 11:19:22.483728  119309 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:19:22.483771  119309 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 11:19:22.540647  119309 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 11:19:23.661781  119309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.904570562s)
	I1101 11:19:23.661937  119309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.812533267s)
	I1101 11:19:23.760490  119309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.660956623s)
	I1101 11:19:23.760542  119309 addons.go:480] Verifying addon metrics-server=true in "newest-cni-268638"
	I1101 11:19:23.970058  119309 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.429352819s)
	I1101 11:19:23.971645  119309 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-268638 addons enable metrics-server
	
	I1101 11:19:23.973116  119309 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1101 11:19:23.975292  119309 addons.go:515] duration metric: took 2.60913809s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1101 11:19:23.975347  119309 start.go:247] waiting for cluster config update ...
	I1101 11:19:23.975366  119309 start.go:256] writing updated cluster config ...
	I1101 11:19:23.975782  119309 ssh_runner.go:195] Run: rm -f paused
	I1101 11:19:24.029738  119309 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 11:19:24.031224  119309 out.go:179] * Done! kubectl is now configured to use "newest-cni-268638" cluster and "default" namespace by default
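The addon installation logged above follows a single pattern: minikube copies each manifest to /etc/kubernetes/addons on the node, then runs one `kubectl apply` with several -f flags under the node's kubeconfig. The Go sketch below mirrors that final apply step for the metrics-server manifests. It is an illustration only, not minikube's ssh_runner/addons code; it assumes kubectl is on PATH and that the manifest paths from the log exist on the machine where it runs.

    // Sketch only: replay the "kubectl apply -f ... -f ..." step from the log.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Manifest paths as they appear in the log above.
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }

        // Build a single apply invocation with one -f per manifest.
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }

        cmd := exec.Command("kubectl", args...)
        // Same kubeconfig the log uses on the node.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Fprintln(os.Stderr, "kubectl apply failed:", err)
            os.Exit(1)
        }
    }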
	
	
	==> CRI-O <==
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.094338613Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761997038094311051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3dbf4d2-5797-42d3-9cee-daa48175a872 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.095028230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee5f1ef1-9afa-445e-b71a-23d678bb4358 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.095079969Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee5f1ef1-9afa-445e-b71a-23d678bb4358 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.095272962Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0ea8b437e2182ceb375b66955a3b26e4c4498bc223cabc20689105364ed5ee6,PodSandboxId:19b5e46208120cfe549f0a75de8c3300847e0a996af8339a68d56fdc83499c4b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761996917017915645,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-7pccn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7f271eef-bded-49cd-b5a1-a618ebebcfcb,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838afc0402cc44a7baca900d97efe9a53459c9a5fa48b14c8b5b7ee572673b34,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761995971570999757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848b80bee9de3b138a0e88b1a8f450d5c67ce43c1ecbbaa7aba66e87723fef76,PodSandboxId:5971a0ea778afc3d3b18aa4570fef30841b519925d07dcb801608158527aa33f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761995953413287885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f85d60f-859a-4d40-83a1-8565332c1575,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cd8b8612923325cb76652a74ef98a220acd8f7792ae9b6958de29b4f8cd712,PodSandboxId:7814f2d98f1d8d387a6e03b78381837b77ced36225ca497d63878951f14b8e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995948028974803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-drlhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe001ab-c59d-4a12-9897-d7d2869a1af8,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e758b5eb3e8f723ce0d318b843d3147f85b6309e541bd51c85df5d1849e4490,PodSandboxId:ab68856a906dcfb0191ae4f4213a70de8a113156a0973a84df75b4ff2523aa69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995940518207761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhjdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b7c2eb-cdb2-4318-bef4-e95e3e478fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2444a95999378aeda01a073bdc97ff16fb844a2080b86a21c1bffecb72fdd394,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761995940619379524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18dcdf4c1e01b3ce82caaa2e01dcefe0853e396a33e4853e2943527579d9eba,PodSandboxId:55ed5cb514feedcb1b87714c12893e6c5a251b0eab38b0344c65fa6c32c50eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,Creat
edAt:1761995934946012476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d406d357930944106b6d791e1ab75f69,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abd77441a1170ed4708a5a0dd563e79e0e5dc1e6203d71b175f2377e559dca2,PodSandboxId:284f6d643a797c1c35cd36ffba29d94d54f2afff14962e0474fc181d7f91cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995934921401989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72692cc5e571176344fbccc16480bc9,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dd634ce58953495186556de934f647c8cf41ade9027121ff41b5179263adfa,PodSandboxId:cc82fa51754fbc7c62bc351909c0f05ae870c91f48b701a1f3fdd01f376da5f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917e
c0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995934889787053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d200fdb47f53d891405a2f21d715c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e7eb14b2041650d2ce9e00e64b37f7fefc47da35b3c38bc68983dcd628e8c9,PodSandboxId:d492cdf741ca13e59796c3988730a1e7ef489e51d033a0c
e715392fe349cb57c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995934871234089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605a85dc1a362c49d35893be2a427c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=ee5f1ef1-9afa-445e-b71a-23d678bb4358 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.139905309Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c5e66af-1495-48fb-a1b3-76efc9aab8ff name=/runtime.v1.RuntimeService/Version
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.140050178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c5e66af-1495-48fb-a1b3-76efc9aab8ff name=/runtime.v1.RuntimeService/Version
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.143328657Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e03a24ab-44a1-4b86-88c7-d85ec8f00ad3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.145208500Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761997038145183890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e03a24ab-44a1-4b86-88c7-d85ec8f00ad3 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.147412227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c672092b-91fc-413c-a849-8aca74c23511 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.147463944Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c672092b-91fc-413c-a849-8aca74c23511 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.147699658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0ea8b437e2182ceb375b66955a3b26e4c4498bc223cabc20689105364ed5ee6,PodSandboxId:19b5e46208120cfe549f0a75de8c3300847e0a996af8339a68d56fdc83499c4b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761996917017915645,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-7pccn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7f271eef-bded-49cd-b5a1-a618ebebcfcb,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838afc0402cc44a7baca900d97efe9a53459c9a5fa48b14c8b5b7ee572673b34,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761995971570999757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848b80bee9de3b138a0e88b1a8f450d5c67ce43c1ecbbaa7aba66e87723fef76,PodSandboxId:5971a0ea778afc3d3b18aa4570fef30841b519925d07dcb801608158527aa33f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761995953413287885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f85d60f-859a-4d40-83a1-8565332c1575,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cd8b8612923325cb76652a74ef98a220acd8f7792ae9b6958de29b4f8cd712,PodSandboxId:7814f2d98f1d8d387a6e03b78381837b77ced36225ca497d63878951f14b8e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995948028974803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-drlhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe001ab-c59d-4a12-9897-d7d2869a1af8,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e758b5eb3e8f723ce0d318b843d3147f85b6309e541bd51c85df5d1849e4490,PodSandboxId:ab68856a906dcfb0191ae4f4213a70de8a113156a0973a84df75b4ff2523aa69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995940518207761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhjdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b7c2eb-cdb2-4318-bef4-e95e3e478fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2444a95999378aeda01a073bdc97ff16fb844a2080b86a21c1bffecb72fdd394,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761995940619379524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18dcdf4c1e01b3ce82caaa2e01dcefe0853e396a33e4853e2943527579d9eba,PodSandboxId:55ed5cb514feedcb1b87714c12893e6c5a251b0eab38b0344c65fa6c32c50eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,Creat
edAt:1761995934946012476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d406d357930944106b6d791e1ab75f69,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abd77441a1170ed4708a5a0dd563e79e0e5dc1e6203d71b175f2377e559dca2,PodSandboxId:284f6d643a797c1c35cd36ffba29d94d54f2afff14962e0474fc181d7f91cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995934921401989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72692cc5e571176344fbccc16480bc9,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dd634ce58953495186556de934f647c8cf41ade9027121ff41b5179263adfa,PodSandboxId:cc82fa51754fbc7c62bc351909c0f05ae870c91f48b701a1f3fdd01f376da5f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917e
c0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995934889787053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d200fdb47f53d891405a2f21d715c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e7eb14b2041650d2ce9e00e64b37f7fefc47da35b3c38bc68983dcd628e8c9,PodSandboxId:d492cdf741ca13e59796c3988730a1e7ef489e51d033a0c
e715392fe349cb57c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995934871234089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605a85dc1a362c49d35893be2a427c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=c672092b-91fc-413c-a849-8aca74c23511 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.189607674Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7bac09c2-8c7a-4f5f-9a0d-9c2e4b7cfe6a name=/runtime.v1.RuntimeService/Version
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.189703934Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7bac09c2-8c7a-4f5f-9a0d-9c2e4b7cfe6a name=/runtime.v1.RuntimeService/Version
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.191239684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7abf0bd5-e3a6-4d0f-bffd-43f3e49b5640 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.192338405Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761997038192311673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7abf0bd5-e3a6-4d0f-bffd-43f3e49b5640 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.192911899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44becfa9-ec22-45df-9ac3-a1103dda58ec name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.193025701Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44becfa9-ec22-45df-9ac3-a1103dda58ec name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.193243777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0ea8b437e2182ceb375b66955a3b26e4c4498bc223cabc20689105364ed5ee6,PodSandboxId:19b5e46208120cfe549f0a75de8c3300847e0a996af8339a68d56fdc83499c4b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761996917017915645,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-7pccn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7f271eef-bded-49cd-b5a1-a618ebebcfcb,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838afc0402cc44a7baca900d97efe9a53459c9a5fa48b14c8b5b7ee572673b34,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761995971570999757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848b80bee9de3b138a0e88b1a8f450d5c67ce43c1ecbbaa7aba66e87723fef76,PodSandboxId:5971a0ea778afc3d3b18aa4570fef30841b519925d07dcb801608158527aa33f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761995953413287885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f85d60f-859a-4d40-83a1-8565332c1575,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cd8b8612923325cb76652a74ef98a220acd8f7792ae9b6958de29b4f8cd712,PodSandboxId:7814f2d98f1d8d387a6e03b78381837b77ced36225ca497d63878951f14b8e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995948028974803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-drlhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe001ab-c59d-4a12-9897-d7d2869a1af8,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e758b5eb3e8f723ce0d318b843d3147f85b6309e541bd51c85df5d1849e4490,PodSandboxId:ab68856a906dcfb0191ae4f4213a70de8a113156a0973a84df75b4ff2523aa69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995940518207761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhjdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b7c2eb-cdb2-4318-bef4-e95e3e478fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2444a95999378aeda01a073bdc97ff16fb844a2080b86a21c1bffecb72fdd394,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761995940619379524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18dcdf4c1e01b3ce82caaa2e01dcefe0853e396a33e4853e2943527579d9eba,PodSandboxId:55ed5cb514feedcb1b87714c12893e6c5a251b0eab38b0344c65fa6c32c50eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,Creat
edAt:1761995934946012476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d406d357930944106b6d791e1ab75f69,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abd77441a1170ed4708a5a0dd563e79e0e5dc1e6203d71b175f2377e559dca2,PodSandboxId:284f6d643a797c1c35cd36ffba29d94d54f2afff14962e0474fc181d7f91cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995934921401989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72692cc5e571176344fbccc16480bc9,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dd634ce58953495186556de934f647c8cf41ade9027121ff41b5179263adfa,PodSandboxId:cc82fa51754fbc7c62bc351909c0f05ae870c91f48b701a1f3fdd01f376da5f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917e
c0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995934889787053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d200fdb47f53d891405a2f21d715c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e7eb14b2041650d2ce9e00e64b37f7fefc47da35b3c38bc68983dcd628e8c9,PodSandboxId:d492cdf741ca13e59796c3988730a1e7ef489e51d033a0c
e715392fe349cb57c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995934871234089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605a85dc1a362c49d35893be2a427c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=44becfa9-ec22-45df-9ac3-a1103dda58ec name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.232113512Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0246ccdb-9bd5-4cf5-8795-75f25f7f4996 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.232199665Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0246ccdb-9bd5-4cf5-8795-75f25f7f4996 name=/runtime.v1.RuntimeService/Version
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.233738536Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91ea74dc-19bd-4041-b656-cf20188ef7f0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.234263939Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761997038234235315,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:176956,},InodesUsed:&UInt64Value{Value:65,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91ea74dc-19bd-4041-b656-cf20188ef7f0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.234803737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=95b28cf3-0369-4ef9-9801-800824896438 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.235078903Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=95b28cf3-0369-4ef9-9801-800824896438 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 11:37:18 default-k8s-diff-port-287419 crio[889]: time="2025-11-01 11:37:18.235620413Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f0ea8b437e2182ceb375b66955a3b26e4c4498bc223cabc20689105364ed5ee6,PodSandboxId:19b5e46208120cfe549f0a75de8c3300847e0a996af8339a68d56fdc83499c4b,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:8,},Image:&ImageSpec{Image:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,State:CONTAINER_EXITED,CreatedAt:1761996917017915645,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-6ffb444bf9-7pccn,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 7f271eef-bded-49cd-b5a1-a618ebebcfcb,},Annotations:map[string]string{io.kubernetes.container.hash: c78228a5,io.k
ubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:838afc0402cc44a7baca900d97efe9a53459c9a5fa48b14c8b5b7ee572673b34,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761995971570999757,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annota
tions:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:848b80bee9de3b138a0e88b1a8f450d5c67ce43c1ecbbaa7aba66e87723fef76,PodSandboxId:5971a0ea778afc3d3b18aa4570fef30841b519925d07dcb801608158527aa33f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761995953413287885,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3f85d60f-859a-4d40-83a1-8565332c1575,},Annotations:map[string]
string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04cd8b8612923325cb76652a74ef98a220acd8f7792ae9b6958de29b4f8cd712,PodSandboxId:7814f2d98f1d8d387a6e03b78381837b77ced36225ca497d63878951f14b8e52,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761995948028974803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-drlhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe001ab-c59d-4a12-9897-d7d2869a1af8,},Annotations:map[string]string{io.kubernetes.container.
hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e758b5eb3e8f723ce0d318b843d3147f85b6309e541bd51c85df5d1849e4490,PodSandboxId:ab68856a906dcfb0191ae4f4213a70de8a113156a0973a84df75b4ff2523aa69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba6833007935
5e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761995940518207761,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lhjdx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63b7c2eb-cdb2-4318-bef4-e95e3e478fb6,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2444a95999378aeda01a073bdc97ff16fb844a2080b86a21c1bffecb72fdd394,PodSandboxId:f89bd170e3ab791be75c6d09b416e71992b7eea519dee363b0aea22c7bd2ed15,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,S
tate:CONTAINER_EXITED,CreatedAt:1761995940619379524,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a047ac3-d0c4-448e-8066-5a3ccd78fcc1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18dcdf4c1e01b3ce82caaa2e01dcefe0853e396a33e4853e2943527579d9eba,PodSandboxId:55ed5cb514feedcb1b87714c12893e6c5a251b0eab38b0344c65fa6c32c50eed,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,Creat
edAt:1761995934946012476,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d406d357930944106b6d791e1ab75f69,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2abd77441a1170ed4708a5a0dd563e79e0e5dc1e6203d71b175f2377e559dca2,PodSandboxId:284f6d643a797c1c35cd36ffba29d94d54f2afff14962e0474fc181d7f91cca0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761995934921401989,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a72692cc5e571176344fbccc16480bc9,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e1dd634ce58953495186556de934f647c8cf41ade9027121ff41b5179263adfa,PodSandboxId:cc82fa51754fbc7c62bc351909c0f05ae870c91f48b701a1f3fdd01f376da5f4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917e
c0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761995934889787053,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 122d200fdb47f53d891405a2f21d715c,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8444,\"containerPort\":8444,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:44e7eb14b2041650d2ce9e00e64b37f7fefc47da35b3c38bc68983dcd628e8c9,PodSandboxId:d492cdf741ca13e59796c3988730a1e7ef489e51d033a0c
e715392fe349cb57c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761995934871234089,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-287419,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 605a85dc1a362c49d35893be2a427c1e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod
: 30,},},},}" file="otel-collector/interceptors.go:74" id=95b28cf3-0369-4ef9-9801-800824896438 name=/runtime.v1.RuntimeService/ListContainers
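The CRI-O debug entries above are the server side of standard CRI gRPC calls (Version, ImageFsInfo, ListContainers) of the kind the kubelet and tools such as crictl issue against the CRI-O socket; the polling is frequent, which is why the same unfiltered container list appears several times within the same second. The Go sketch below issues the same Version and ListContainers calls with the published CRI client; the socket path is CRI-O's default and is an assumption of the example.

    // Sketch only: client side of the /runtime.v1.RuntimeService calls seen above.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default socket; adjust if the runtime is configured differently.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same call as "/runtime.v1.RuntimeService/Version" in the log.
        ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s %s (CRI %s)\n", ver.RuntimeName, ver.RuntimeVersion, ver.RuntimeApiVersion)

        // Same call as "/runtime.v1.RuntimeService/ListContainers" with no filter,
        // which is why the full container list comes back.
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%-30s %-20s %s\n", c.GetMetadata().GetName(), c.GetState(), c.GetId()[:13])
        }
    }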
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	f0ea8b437e218       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                      2 minutes ago       Exited              dashboard-metrics-scraper   8                   19b5e46208120       dashboard-metrics-scraper-6ffb444bf9-7pccn
	838afc0402cc4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner         2                   f89bd170e3ab7       storage-provisioner
	848b80bee9de3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                     1                   5971a0ea778af       busybox
	04cd8b8612923       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      18 minutes ago      Running             coredns                     1                   7814f2d98f1d8       coredns-66bc5c9577-drlhc
	2444a95999378       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner         1                   f89bd170e3ab7       storage-provisioner
	1e758b5eb3e8f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      18 minutes ago      Running             kube-proxy                  1                   ab68856a906dc       kube-proxy-lhjdx
	c18dcdf4c1e01       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      18 minutes ago      Running             etcd                        1                   55ed5cb514fee       etcd-default-k8s-diff-port-287419
	2abd77441a117       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      18 minutes ago      Running             kube-scheduler              1                   284f6d643a797       kube-scheduler-default-k8s-diff-port-287419
	e1dd634ce5895       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      18 minutes ago      Running             kube-apiserver              1                   cc82fa51754fb       kube-apiserver-default-k8s-diff-port-287419
	44e7eb14b2041       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      18 minutes ago      Running             kube-controller-manager     1                   d492cdf741ca1       kube-controller-manager-default-k8s-diff-port-287419
	
	
	==> coredns [04cd8b8612923325cb76652a74ef98a220acd8f7792ae9b6958de29b4f8cd712] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48196 - 3921 "HINFO IN 3775580877941796997.6007613078252661342. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.092288418s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-287419
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-287419
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8d0f47abe6720ae55a5722df67bba0ddd12c8845
	                    minikube.k8s.io/name=default-k8s-diff-port-287419
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T11_15_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 11:15:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-287419
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 11:37:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 11:34:39 +0000   Sat, 01 Nov 2025 11:15:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 11:34:39 +0000   Sat, 01 Nov 2025 11:15:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 11:34:39 +0000   Sat, 01 Nov 2025 11:15:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 11:34:39 +0000   Sat, 01 Nov 2025 11:19:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.189
	  Hostname:    default-k8s-diff-port-287419
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca9e8ff862574318bf222e13a7f3b00b
	  System UUID:                ca9e8ff8-6257-4318-bf22-2e13a7f3b00b
	  Boot ID:                    7b59af70-09e5-4e21-ac3c-3c1ffa10b358
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-drlhc                                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-default-k8s-diff-port-287419                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-287419             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-287419    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-lhjdx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-287419             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-746fcd58dc-zmbnr                         100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-7pccn              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jt94t                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (12%)  170Mi (5%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasSufficientPID
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeReady                21m                kubelet          Node default-k8s-diff-port-287419 status is now: NodeReady
	  Normal   RegisteredNode           21m                node-controller  Node default-k8s-diff-port-287419 event: Registered Node default-k8s-diff-port-287419 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-287419 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 18m                kubelet          Node default-k8s-diff-port-287419 has been rebooted, boot id: 7b59af70-09e5-4e21-ac3c-3c1ffa10b358
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-287419 event: Registered Node default-k8s-diff-port-287419 in Controller
	
	
	==> dmesg <==
	[Nov 1 11:18] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000691] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.004376] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.734714] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.105593] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.143469] kauditd_printk_skb: 74 callbacks suppressed
	[Nov 1 11:19] kauditd_printk_skb: 196 callbacks suppressed
	[  +1.335384] kauditd_printk_skb: 176 callbacks suppressed
	[  +0.090298] kauditd_printk_skb: 141 callbacks suppressed
	[  +6.902046] kauditd_printk_skb: 38 callbacks suppressed
	[ +14.036700] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.439376] kauditd_printk_skb: 5 callbacks suppressed
	[ +12.634980] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 1 11:20] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 1 11:22] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 1 11:25] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 1 11:30] kauditd_printk_skb: 6 callbacks suppressed
	[Nov 1 11:35] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [c18dcdf4c1e01b3ce82caaa2e01dcefe0853e396a33e4853e2943527579d9eba] <==
	{"level":"info","ts":"2025-11-01T11:19:04.865712Z","caller":"traceutil/trace.go:172","msg":"trace[118970669] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"131.155334ms","start":"2025-11-01T11:19:04.734546Z","end":"2025-11-01T11:19:04.865701Z","steps":["trace[118970669] 'process raft request'  (duration: 131.061556ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:19:04.865787Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.053475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" limit:1 ","response":"range_response_count:1 size:5183"}
	{"level":"info","ts":"2025-11-01T11:19:04.865946Z","caller":"traceutil/trace.go:172","msg":"trace[1734560141] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:581; }","duration":"107.258666ms","start":"2025-11-01T11:19:04.758677Z","end":"2025-11-01T11:19:04.865936Z","steps":["trace[1734560141] 'agreement among raft nodes before linearized reading'  (duration: 106.959161ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:19:04.866197Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.92486ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/storage-provisioner\" limit:1 ","response":"range_response_count:1 size:1145"}
	{"level":"info","ts":"2025-11-01T11:19:04.866217Z","caller":"traceutil/trace.go:172","msg":"trace[722761142] range","detail":"{range_begin:/registry/clusterrolebindings/storage-provisioner; range_end:; response_count:1; response_revision:582; }","duration":"100.950342ms","start":"2025-11-01T11:19:04.765262Z","end":"2025-11-01T11:19:04.866212Z","steps":["trace[722761142] 'agreement among raft nodes before linearized reading'  (duration: 100.836427ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:19:11.359989Z","caller":"traceutil/trace.go:172","msg":"trace[1737303021] transaction","detail":"{read_only:false; response_revision:679; number_of_response:1; }","duration":"120.872536ms","start":"2025-11-01T11:19:11.239091Z","end":"2025-11-01T11:19:11.359963Z","steps":["trace[1737303021] 'process raft request'  (duration: 56.105618ms)","trace[1737303021] 'compare'  (duration: 64.415753ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T11:19:12.426252Z","caller":"traceutil/trace.go:172","msg":"trace[624116074] transaction","detail":"{read_only:false; response_revision:680; number_of_response:1; }","duration":"278.021397ms","start":"2025-11-01T11:19:12.148216Z","end":"2025-11-01T11:19:12.426237Z","steps":["trace[624116074] 'process raft request'  (duration: 277.900881ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:19:12.843999Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"353.818914ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1431188377579781163 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" mod_revision:680 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" value_size:7070 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T11:19:12.846257Z","caller":"traceutil/trace.go:172","msg":"trace[627877480] transaction","detail":"{read_only:false; response_revision:681; number_of_response:1; }","duration":"399.04676ms","start":"2025-11-01T11:19:12.447194Z","end":"2025-11-01T11:19:12.846241Z","steps":["trace[627877480] 'process raft request'  (duration: 42.121384ms)","trace[627877480] 'compare'  (duration: 353.681691ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:19:12.846383Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:19:12.447175Z","time spent":"399.158958ms","remote":"127.0.0.1:38944","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7148,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" mod_revision:680 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" value_size:7070 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" > >"}
	{"level":"warn","ts":"2025-11-01T11:19:13.277709Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"433.614055ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1431188377579781164 > lease_revoke:<id:13dc9a3f215ccc33>","response":"size:27"}
	{"level":"info","ts":"2025-11-01T11:19:13.277788Z","caller":"traceutil/trace.go:172","msg":"trace[1986476020] linearizableReadLoop","detail":"{readStateIndex:730; appliedIndex:729; }","duration":"427.999836ms","start":"2025-11-01T11:19:12.849776Z","end":"2025-11-01T11:19:13.277776Z","steps":["trace[1986476020] 'read index received'  (duration: 115.359685ms)","trace[1986476020] 'applied index is now lower than readState.Index'  (duration: 312.639454ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T11:19:13.278051Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.410264ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T11:19:13.278092Z","caller":"traceutil/trace.go:172","msg":"trace[2128589413] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:681; }","duration":"217.460761ms","start":"2025-11-01T11:19:13.060621Z","end":"2025-11-01T11:19:13.278082Z","steps":["trace[2128589413] 'range keys from in-memory index tree'  (duration: 217.380452ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:19:13.278235Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"428.447882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" limit:1 ","response":"range_response_count:1 size:7163"}
	{"level":"info","ts":"2025-11-01T11:19:13.278266Z","caller":"traceutil/trace.go:172","msg":"trace[604866966] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419; range_end:; response_count:1; response_revision:681; }","duration":"428.485242ms","start":"2025-11-01T11:19:12.849772Z","end":"2025-11-01T11:19:13.278257Z","steps":["trace[604866966] 'agreement among raft nodes before linearized reading'  (duration: 428.369273ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T11:19:13.278291Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T11:19:12.849758Z","time spent":"428.525255ms","remote":"127.0.0.1:38944","response type":"/etcdserverpb.KV/Range","request count":0,"request size":74,"response count":1,"response size":7185,"request content":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-287419\" limit:1 "}
	{"level":"warn","ts":"2025-11-01T11:19:13.282012Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"136.43253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T11:19:13.282177Z","caller":"traceutil/trace.go:172","msg":"trace[1391153181] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:681; }","duration":"136.598631ms","start":"2025-11-01T11:19:13.145568Z","end":"2025-11-01T11:19:13.282167Z","steps":["trace[1391153181] 'agreement among raft nodes before linearized reading'  (duration: 136.385737ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T11:28:57.056617Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1041}
	{"level":"info","ts":"2025-11-01T11:28:57.082055Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1041,"took":"24.789923ms","hash":1055481486,"current-db-size-bytes":3297280,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1343488,"current-db-size-in-use":"1.3 MB"}
	{"level":"info","ts":"2025-11-01T11:28:57.082114Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1055481486,"revision":1041,"compact-revision":-1}
	{"level":"info","ts":"2025-11-01T11:33:57.064625Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1324}
	{"level":"info","ts":"2025-11-01T11:33:57.069350Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1324,"took":"3.882561ms","hash":1592017337,"current-db-size-bytes":3297280,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1880064,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-11-01T11:33:57.069382Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1592017337,"revision":1324,"compact-revision":1041}
	
	
	==> kernel <==
	 11:37:18 up 18 min,  0 users,  load average: 0.54, 0.30, 0.23
	Linux default-k8s-diff-port-287419 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e1dd634ce58953495186556de934f647c8cf41ade9027121ff41b5179263adfa] <==
	E1101 11:34:00.565894       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 11:34:00.565943       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1101 11:34:00.565964       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 11:34:00.567055       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 11:35:00.566640       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 11:35:00.566708       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 11:35:00.566724       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 11:35:00.567688       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 11:35:00.567726       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 11:35:00.567768       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 11:37:00.567410       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 11:37:00.567670       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1101 11:37:00.567716       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1101 11:37:00.568631       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 11:37:00.568710       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1101 11:37:00.568720       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [44e7eb14b2041650d2ce9e00e64b37f7fefc47da35b3c38bc68983dcd628e8c9] <==
	I1101 11:31:03.478666       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:31:33.324808       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:31:33.487670       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:32:03.329805       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:32:03.496266       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:32:33.336104       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:32:33.505651       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:33:03.340945       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:33:03.515132       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:33:33.346698       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:33:33.526935       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:34:03.351986       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:34:03.536375       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:34:33.356802       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:34:33.544563       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:35:03.362658       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:35:03.551913       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:35:33.367165       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:35:33.561044       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:36:03.372889       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:36:03.571581       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:36:33.377729       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:36:33.579660       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1101 11:37:03.382882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1101 11:37:03.587426       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [1e758b5eb3e8f723ce0d318b843d3147f85b6309e541bd51c85df5d1849e4490] <==
	I1101 11:19:01.268725       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 11:19:01.369623       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 11:19:01.369670       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.189"]
	E1101 11:19:01.369774       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 11:19:01.428199       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 11:19:01.428305       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 11:19:01.428351       1 server_linux.go:132] "Using iptables Proxier"
	I1101 11:19:01.441808       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 11:19:01.442339       1 server.go:527] "Version info" version="v1.34.1"
	I1101 11:19:01.442383       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:19:01.448600       1 config.go:200] "Starting service config controller"
	I1101 11:19:01.448678       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 11:19:01.448724       1 config.go:106] "Starting endpoint slice config controller"
	I1101 11:19:01.448748       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 11:19:01.448773       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 11:19:01.448787       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 11:19:01.449736       1 config.go:309] "Starting node config controller"
	I1101 11:19:01.449803       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 11:19:01.449915       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 11:19:01.552195       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 11:19:01.552230       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 11:19:01.552280       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2abd77441a1170ed4708a5a0dd563e79e0e5dc1e6203d71b175f2377e559dca2] <==
	I1101 11:18:57.948568       1 serving.go:386] Generated self-signed cert in-memory
	W1101 11:18:59.485639       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 11:18:59.485697       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 11:18:59.485712       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 11:18:59.485723       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 11:18:59.575013       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 11:18:59.575215       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 11:18:59.578747       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 11:18:59.578928       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:18:59.578942       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 11:18:59.578962       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 11:18:59.680094       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 11:36:31 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:31.004151    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-zmbnr" podUID="ffa3dd51-bf02-44da-800d-f8d714bc1b36"
	Nov 01 11:36:34 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:34.270296    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761996994269757352  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:36:34 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:34.270317    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761996994269757352  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:36:40 default-k8s-diff-port-287419 kubelet[1225]: I1101 11:36:40.001121    1225 scope.go:117] "RemoveContainer" containerID="f0ea8b437e2182ceb375b66955a3b26e4c4498bc223cabc20689105364ed5ee6"
	Nov 01 11:36:40 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:40.001329    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7pccn_kubernetes-dashboard(7f271eef-bded-49cd-b5a1-a618ebebcfcb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7pccn" podUID="7f271eef-bded-49cd-b5a1-a618ebebcfcb"
	Nov 01 11:36:40 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:40.004411    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jt94t" podUID="797f79dc-31d4-4da5-af7c-2b7c3c4d804b"
	Nov 01 11:36:44 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:44.005126    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-zmbnr" podUID="ffa3dd51-bf02-44da-800d-f8d714bc1b36"
	Nov 01 11:36:44 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:44.272474    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761997004271660585  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:36:44 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:44.272497    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761997004271660585  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:36:51 default-k8s-diff-port-287419 kubelet[1225]: I1101 11:36:51.000556    1225 scope.go:117] "RemoveContainer" containerID="f0ea8b437e2182ceb375b66955a3b26e4c4498bc223cabc20689105364ed5ee6"
	Nov 01 11:36:51 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:51.001128    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7pccn_kubernetes-dashboard(7f271eef-bded-49cd-b5a1-a618ebebcfcb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7pccn" podUID="7f271eef-bded-49cd-b5a1-a618ebebcfcb"
	Nov 01 11:36:54 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:54.274029    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761997014273390705  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:36:54 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:54.274280    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761997014273390705  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:36:55 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:55.002613    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jt94t" podUID="797f79dc-31d4-4da5-af7c-2b7c3c4d804b"
	Nov 01 11:36:57 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:36:57.003129    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-zmbnr" podUID="ffa3dd51-bf02-44da-800d-f8d714bc1b36"
	Nov 01 11:37:03 default-k8s-diff-port-287419 kubelet[1225]: I1101 11:37:03.001142    1225 scope.go:117] "RemoveContainer" containerID="f0ea8b437e2182ceb375b66955a3b26e4c4498bc223cabc20689105364ed5ee6"
	Nov 01 11:37:03 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:37:03.001337    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7pccn_kubernetes-dashboard(7f271eef-bded-49cd-b5a1-a618ebebcfcb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7pccn" podUID="7f271eef-bded-49cd-b5a1-a618ebebcfcb"
	Nov 01 11:37:04 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:37:04.276909    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761997024276371416  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:37:04 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:37:04.276951    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761997024276371416  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:37:06 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:37:06.002518    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: copying system image from manifest list: determining manifest MIME type for docker://kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: reading manifest sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029 in docker.io/kubernetesui/dashboard: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-jt94t" podUID="797f79dc-31d4-4da5-af7c-2b7c3c4d804b"
	Nov 01 11:37:11 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:37:11.003026    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-zmbnr" podUID="ffa3dd51-bf02-44da-800d-f8d714bc1b36"
	Nov 01 11:37:14 default-k8s-diff-port-287419 kubelet[1225]: I1101 11:37:14.005110    1225 scope.go:117] "RemoveContainer" containerID="f0ea8b437e2182ceb375b66955a3b26e4c4498bc223cabc20689105364ed5ee6"
	Nov 01 11:37:14 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:37:14.005286    1225 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-7pccn_kubernetes-dashboard(7f271eef-bded-49cd-b5a1-a618ebebcfcb)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-7pccn" podUID="7f271eef-bded-49cd-b5a1-a618ebebcfcb"
	Nov 01 11:37:14 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:37:14.278944    1225 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761997034278514969  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	Nov 01 11:37:14 default-k8s-diff-port-287419 kubelet[1225]: E1101 11:37:14.279014    1225 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761997034278514969  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:176956}  inodes_used:{value:65}}"
	
	
	==> storage-provisioner [2444a95999378aeda01a073bdc97ff16fb844a2080b86a21c1bffecb72fdd394] <==
	I1101 11:19:00.841344       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 11:19:30.866234       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [838afc0402cc44a7baca900d97efe9a53459c9a5fa48b14c8b5b7ee572673b34] <==
	W1101 11:36:54.651275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:36:56.655295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:36:56.663443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:36:58.667104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:36:58.671939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:00.675292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:00.680127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:02.683994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:02.689976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:04.693711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:04.702951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:06.706211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:06.711592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:08.715055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:08.719919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:10.723261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:10.732079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:12.735438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:12.746346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:14.750136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:14.757219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:16.760987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:16.765325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:18.769662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 11:37:18.779509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-zmbnr kubernetes-dashboard-855c9754f9-jt94t
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 describe pod metrics-server-746fcd58dc-zmbnr kubernetes-dashboard-855c9754f9-jt94t
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-287419 describe pod metrics-server-746fcd58dc-zmbnr kubernetes-dashboard-855c9754f9-jt94t: exit status 1 (60.551208ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-zmbnr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-jt94t" not found

** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-287419 describe pod metrics-server-746fcd58dc-zmbnr kubernetes-dashboard-855c9754f9-jt94t: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.57s)
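
The kubelet and post-mortem output above point to image pulls as the reason the two pods never became ready: the kubernetes-dashboard pull is rejected by Docker Hub's unauthenticated rate limit (toomanyrequests), and the metrics-server image references fake.domain, which does not resolve. The commands below are an illustrative sketch only, not part of the recorded run; the k8s-app label selector is an assumption about the dashboard manifest, while the context name and field selector are taken from the log above.

	# Illustrative only -- not executed as part of this test run.
	# List pods that are not Running, cluster-wide, with node placement.
	kubectl --context default-k8s-diff-port-287419 get pods -A \
	  --field-selector=status.phase!=Running -o wide
	# Show the pull-failure events for the dashboard pod (label selector assumed).
	kubectl --context default-k8s-diff-port-287419 -n kubernetes-dashboard \
	  describe pods -l k8s-app=kubernetes-dashboard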


Test pass (289/343)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 7.21
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.17
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.16
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.65
22 TestOffline 103.16
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 151.03
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 10.62
35 TestAddons/parallel/Registry 61.93
36 TestAddons/parallel/RegistryCreds 0.66
38 TestAddons/parallel/InspektorGadget 6.31
39 TestAddons/parallel/MetricsServer 6.01
42 TestAddons/parallel/Headlamp 22.99
43 TestAddons/parallel/CloudSpanner 5.63
45 TestAddons/parallel/NvidiaDevicePlugin 6.73
46 TestAddons/parallel/Yakd 11.77
48 TestAddons/StoppedEnableDisable 70.95
49 TestCertOptions 67.08
50 TestCertExpiration 318.74
52 TestForceSystemdFlag 87.68
53 TestForceSystemdEnv 41.46
58 TestErrorSpam/setup 39.86
59 TestErrorSpam/start 0.35
60 TestErrorSpam/status 0.69
61 TestErrorSpam/pause 1.63
62 TestErrorSpam/unpause 1.93
63 TestErrorSpam/stop 5.22
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 82.18
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 376.22
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
75 TestFunctional/serial/CacheCmd/cache/add_local 2
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.19
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 40.6
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.67
86 TestFunctional/serial/LogsFileCmd 1.65
87 TestFunctional/serial/InvalidService 4.29
89 TestFunctional/parallel/ConfigCmd 0.44
91 TestFunctional/parallel/DryRun 0.22
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.72
98 TestFunctional/parallel/AddonsCmd 0.16
101 TestFunctional/parallel/SSHCmd 0.37
102 TestFunctional/parallel/CpCmd 1.14
103 TestFunctional/parallel/MySQL 23.48
104 TestFunctional/parallel/FileSync 0.19
105 TestFunctional/parallel/CertSync 1.15
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
113 TestFunctional/parallel/License 0.39
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 0.46
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.19
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.6
121 TestFunctional/parallel/ImageCommands/Setup 1.71
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
135 TestFunctional/parallel/ProfileCmd/profile_list 0.34
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.65
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.15
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.99
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.84
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.91
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
145 TestFunctional/parallel/MountCmd/any-port 104.84
146 TestFunctional/parallel/MountCmd/specific-port 1.22
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.05
148 TestFunctional/parallel/ServiceCmd/List 1.2
149 TestFunctional/parallel/ServiceCmd/JSONOutput 1.2
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 212.32
161 TestMultiControlPlane/serial/DeployApp 8.38
162 TestMultiControlPlane/serial/PingHostFromPods 1.36
163 TestMultiControlPlane/serial/AddWorkerNode 47.01
164 TestMultiControlPlane/serial/NodeLabels 0.09
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.7
166 TestMultiControlPlane/serial/CopyFile 11.01
167 TestMultiControlPlane/serial/StopSecondaryNode 86.42
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
169 TestMultiControlPlane/serial/RestartSecondaryNode 49.93
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 309.37
172 TestMultiControlPlane/serial/DeleteSecondaryNode 19.51
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
174 TestMultiControlPlane/serial/StopCluster 257.11
175 TestMultiControlPlane/serial/RestartCluster 100.73
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.52
177 TestMultiControlPlane/serial/AddSecondaryNode 84.46
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.72
183 TestJSONOutput/start/Command 80.22
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.77
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 6.99
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 87.11
215 TestMountStart/serial/StartWithMountFirst 21.12
216 TestMountStart/serial/VerifyMountFirst 0.3
217 TestMountStart/serial/StartWithMountSecond 22.07
218 TestMountStart/serial/VerifyMountSecond 0.3
219 TestMountStart/serial/DeleteFirst 0.69
220 TestMountStart/serial/VerifyMountPostDelete 0.3
221 TestMountStart/serial/Stop 1.33
222 TestMountStart/serial/RestartStopped 21.03
223 TestMountStart/serial/VerifyMountPostStop 0.3
226 TestMultiNode/serial/FreshStart2Nodes 103.01
227 TestMultiNode/serial/DeployApp2Nodes 6.82
228 TestMultiNode/serial/PingHostFrom2Pods 0.87
229 TestMultiNode/serial/AddNode 46.4
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.47
232 TestMultiNode/serial/CopyFile 6.09
233 TestMultiNode/serial/StopNode 2.59
234 TestMultiNode/serial/StartAfterStop 45.25
235 TestMultiNode/serial/RestartKeepsNodes 306.35
236 TestMultiNode/serial/DeleteNode 2.61
237 TestMultiNode/serial/StopMultiNode 174.74
238 TestMultiNode/serial/RestartMultiNode 98.15
239 TestMultiNode/serial/ValidateNameConflict 41.42
246 TestScheduledStopUnix 110.53
250 TestRunningBinaryUpgrade 97.46
252 TestKubernetesUpgrade 159.86
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
259 TestNoKubernetes/serial/StartWithK8s 85.39
264 TestNetworkPlugins/group/false 3.48
268 TestNoKubernetes/serial/StartWithStopK8s 29.57
269 TestNoKubernetes/serial/Start 40.58
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
271 TestNoKubernetes/serial/ProfileList 1.19
272 TestNoKubernetes/serial/Stop 1.41
273 TestNoKubernetes/serial/StartNoArgs 42.07
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
275 TestStoppedBinaryUpgrade/Setup 0.57
276 TestStoppedBinaryUpgrade/Upgrade 120.65
277 TestISOImage/Setup 59.29
279 TestISOImage/Binaries/crictl 0.18
280 TestISOImage/Binaries/curl 0.18
281 TestISOImage/Binaries/docker 0.17
282 TestISOImage/Binaries/git 0.18
283 TestISOImage/Binaries/iptables 0.18
284 TestISOImage/Binaries/podman 0.17
285 TestISOImage/Binaries/rsync 0.17
286 TestISOImage/Binaries/socat 0.16
287 TestISOImage/Binaries/wget 0.17
288 TestISOImage/Binaries/VBoxControl 0.17
289 TestISOImage/Binaries/VBoxService 0.16
298 TestPause/serial/Start 59.13
300 TestStoppedBinaryUpgrade/MinikubeLogs 1.41
301 TestNetworkPlugins/group/auto/Start 97.49
302 TestNetworkPlugins/group/kindnet/Start 96.51
303 TestNetworkPlugins/group/calico/Start 101.98
304 TestNetworkPlugins/group/custom-flannel/Start 87.81
305 TestNetworkPlugins/group/auto/KubeletFlags 0.21
306 TestNetworkPlugins/group/auto/NetCatPod 12.24
307 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
308 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
309 TestNetworkPlugins/group/kindnet/NetCatPod 12.77
310 TestNetworkPlugins/group/auto/DNS 0.21
311 TestNetworkPlugins/group/auto/Localhost 0.16
312 TestNetworkPlugins/group/auto/HairPin 0.19
313 TestNetworkPlugins/group/kindnet/DNS 0.24
314 TestNetworkPlugins/group/kindnet/Localhost 0.17
315 TestNetworkPlugins/group/kindnet/HairPin 0.18
316 TestNetworkPlugins/group/enable-default-cni/Start 88.42
317 TestNetworkPlugins/group/calico/ControllerPod 6.01
318 TestNetworkPlugins/group/flannel/Start 91.37
319 TestNetworkPlugins/group/calico/KubeletFlags 0.18
320 TestNetworkPlugins/group/calico/NetCatPod 10.34
321 TestNetworkPlugins/group/calico/DNS 0.22
322 TestNetworkPlugins/group/calico/Localhost 0.18
323 TestNetworkPlugins/group/calico/HairPin 0.16
324 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
325 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.33
326 TestNetworkPlugins/group/custom-flannel/DNS 0.2
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
329 TestNetworkPlugins/group/bridge/Start 101.32
331 TestStartStop/group/old-k8s-version/serial/FirstStart 82.42
332 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
333 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.31
334 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
335 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
336 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
337 TestNetworkPlugins/group/flannel/ControllerPod 6.01
338 TestNetworkPlugins/group/flannel/KubeletFlags 0.19
339 TestNetworkPlugins/group/flannel/NetCatPod 10.28
341 TestStartStop/group/no-preload/serial/FirstStart 104.74
342 TestNetworkPlugins/group/flannel/DNS 0.19
343 TestNetworkPlugins/group/flannel/Localhost 0.15
344 TestNetworkPlugins/group/flannel/HairPin 0.17
346 TestStartStop/group/embed-certs/serial/FirstStart 93.64
347 TestStartStop/group/old-k8s-version/serial/DeployApp 11.38
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
349 TestNetworkPlugins/group/bridge/NetCatPod 11.32
350 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.8
351 TestStartStop/group/old-k8s-version/serial/Stop 82.95
352 TestNetworkPlugins/group/bridge/DNS 0.18
353 TestNetworkPlugins/group/bridge/Localhost 0.16
354 TestNetworkPlugins/group/bridge/HairPin 0.14
356 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 85.33
357 TestStartStop/group/no-preload/serial/DeployApp 10.29
358 TestStartStop/group/embed-certs/serial/DeployApp 11.3
359 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
360 TestStartStop/group/no-preload/serial/Stop 82.94
361 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
362 TestStartStop/group/old-k8s-version/serial/SecondStart 47.22
363 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.06
364 TestStartStop/group/embed-certs/serial/Stop 84.85
365 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.32
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.16
367 TestStartStop/group/default-k8s-diff-port/serial/Stop 77.17
368 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 15.01
369 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
370 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
371 TestStartStop/group/old-k8s-version/serial/Pause 2.6
373 TestStartStop/group/newest-cni/serial/FirstStart 48.71
374 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
375 TestStartStop/group/no-preload/serial/SecondStart 72.49
376 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
377 TestStartStop/group/embed-certs/serial/SecondStart 67.7
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
379 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 69.14
380 TestStartStop/group/newest-cni/serial/DeployApp 0
381 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.79
382 TestStartStop/group/newest-cni/serial/Stop 11.64
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
384 TestStartStop/group/newest-cni/serial/SecondStart 57.78
385 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.11
386 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 20.01
387 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
388 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
389 TestStartStop/group/no-preload/serial/Pause 3.62
391 TestISOImage/PersistentMounts//data 0.38
392 TestISOImage/PersistentMounts//var/lib/docker 0.19
393 TestISOImage/PersistentMounts//var/lib/cni 0.22
394 TestISOImage/PersistentMounts//var/lib/kubelet 0.21
395 TestISOImage/PersistentMounts//var/lib/minikube 0.26
396 TestISOImage/PersistentMounts//var/lib/toolbox 0.21
397 TestISOImage/PersistentMounts//var/lib/boot2docker 0.21
398 TestISOImage/eBPFSupport 0.18
399 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
401 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
402 TestStartStop/group/embed-certs/serial/Pause 2.94
403 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
405 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
406 TestStartStop/group/newest-cni/serial/Pause 2.45
408 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
409 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.52
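
The durations in the table above are wall-clock seconds per test. As a rough sketch only (assuming the integration suite under test/integration and the standard Go test runner; the exact flags this CI job uses are not shown in this excerpt), a single entry such as TestFunctional/serial/SoftStart could be re-run locally along these lines:

    # hypothetical local reproduction of one table entry; suite-specific flags
    # (driver, container runtime, path to the built minikube binary) still need
    # to be supplied and are not shown here
    go test ./test/integration -run 'TestFunctional/serial/SoftStart' -timeout 90m -v
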
TestDownloadOnly/v1.28.0/json-events (7.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-319914 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-319914 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.208836129s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 09:49:52.507600   73998 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 09:49:52.507704   73998 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
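
The check above only asserts that the cached tarball is already on disk. A manual equivalent, using the exact path from the log:

    # confirm the v1.28.0 CRI-O preload tarball is present and non-empty
    ls -lh /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4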

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-319914
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-319914: exit status 85 (71.235989ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-319914 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-319914 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:49:45
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:49:45.351694   74010 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:45.351936   74010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:45.351944   74010 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:45.351948   74010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:45.352130   74010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	W1101 09:49:45.352254   74010 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21830-70113/.minikube/config/config.json: open /home/jenkins/minikube-integration/21830-70113/.minikube/config/config.json: no such file or directory
	I1101 09:49:45.352769   74010 out.go:368] Setting JSON to true
	I1101 09:49:45.353692   74010 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5533,"bootTime":1761985052,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:49:45.353784   74010 start.go:143] virtualization: kvm guest
	I1101 09:49:45.356214   74010 out.go:99] [download-only-319914] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:49:45.356348   74010 notify.go:221] Checking for updates...
	W1101 09:49:45.356416   74010 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 09:49:45.357861   74010 out.go:171] MINIKUBE_LOCATION=21830
	I1101 09:49:45.359259   74010 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:45.360763   74010 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 09:49:45.362100   74010 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 09:49:45.363526   74010 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 09:49:45.366198   74010 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:49:45.366413   74010 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:49:45.400683   74010 out.go:99] Using the kvm2 driver based on user configuration
	I1101 09:49:45.400716   74010 start.go:309] selected driver: kvm2
	I1101 09:49:45.400723   74010 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:49:45.401086   74010 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:49:45.401651   74010 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1101 09:49:45.401824   74010 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:49:45.401864   74010 cni.go:84] Creating CNI manager for ""
	I1101 09:49:45.401916   74010 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:49:45.401934   74010 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:49:45.401978   74010 start.go:353] cluster config:
	{Name:download-only-319914 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-319914 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:49:45.402191   74010 iso.go:125] acquiring lock: {Name:mk49d9a272bb99d336f82dfc5631a4c8ce9271c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:49:45.404049   74010 out.go:99] Downloading VM boot image ...
	I1101 09:49:45.404111   74010 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21830-70113/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 09:49:48.423278   74010 out.go:99] Starting "download-only-319914" primary control-plane node in "download-only-319914" cluster
	I1101 09:49:48.423330   74010 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:49:48.437987   74010 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:49:48.438030   74010 cache.go:59] Caching tarball of preloaded images
	I1101 09:49:48.438201   74010 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:49:48.439915   74010 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 09:49:48.439944   74010 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 09:49:48.465601   74010 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1101 09:49:48.465738   74010 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-319914 host does not exist
	  To start a cluster, run: "minikube start -p download-only-319914"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
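
This subtest passes even though "minikube logs" exits non-zero: the profile was created with --download-only, so no host exists and exit status 85 (with the "host does not exist" hint above) is the expected outcome; the subtest presumably only bounds how long the command takes. Reproducing the exit code by hand:

    # expected to print "exit status: 85" while the download-only profile has no host
    out/minikube-linux-amd64 logs -p download-only-319914; echo "exit status: $?"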

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-319914
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-036288 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-036288 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.171977513s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.17s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 09:49:56.056338   73998 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 09:49:56.056374   73998 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21830-70113/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-036288
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-036288: exit status 85 (72.374183ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-319914 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-319914 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ delete  │ -p download-only-319914                                                                                                                                                 │ download-only-319914 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │ 01 Nov 25 09:49 UTC │
	│ start   │ -o=json --download-only -p download-only-036288 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-036288 │ jenkins │ v1.37.0 │ 01 Nov 25 09:49 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:49:52
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:49:52.935966   74203 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:49:52.936233   74203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:52.936244   74203 out.go:374] Setting ErrFile to fd 2...
	I1101 09:49:52.936249   74203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:49:52.936430   74203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 09:49:52.936875   74203 out.go:368] Setting JSON to true
	I1101 09:49:52.937751   74203 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5541,"bootTime":1761985052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:49:52.937840   74203 start.go:143] virtualization: kvm guest
	I1101 09:49:52.939648   74203 out.go:99] [download-only-036288] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:49:52.939799   74203 notify.go:221] Checking for updates...
	I1101 09:49:52.940928   74203 out.go:171] MINIKUBE_LOCATION=21830
	I1101 09:49:52.942187   74203 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:49:52.943363   74203 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 09:49:52.944465   74203 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 09:49:52.945588   74203 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-036288 host does not exist
	  To start a cluster, run: "minikube start -p download-only-036288"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-036288
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.65s)

                                                
                                                
=== RUN   TestBinaryMirror
I1101 09:49:56.716289   73998 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-623089 --alsologtostderr --binary-mirror http://127.0.0.1:33603 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-623089" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-623089
--- PASS: TestBinaryMirror (0.65s)
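
The test points minikube at a throwaway mirror on 127.0.0.1:33603 via --binary-mirror. As an illustration only (the test spins up its own mirror internally, and the directory layout of a real mirror is an assumption not taken from this report), a local mirror can be approximated with any static HTTP server:

    # serve a directory containing the kubectl/kubelet/kubeadm release tree on the port used above
    python3 -m http.server 33603 --directory /path/to/mirror &
    out/minikube-linux-amd64 start --download-only -p binary-mirror-623089 --binary-mirror http://127.0.0.1:33603 --driver=kvm2 --container-runtime=crio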

                                                
                                    
TestOffline (103.16s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-017229 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-017229 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m42.217864149s)
helpers_test.go:175: Cleaning up "offline-crio-017229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-017229
--- PASS: TestOffline (103.16s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-086339
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-086339: exit status 85 (64.414805ms)

                                                
                                                
-- stdout --
	* Profile "addons-086339" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-086339"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-086339
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-086339: exit status 85 (64.332021ms)

                                                
                                                
-- stdout --
	* Profile "addons-086339" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-086339"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
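
Both PreSetup subtests exercise the same guard: addon commands against a profile that has not been created yet should fail fast with exit status 85 and the hint shown above, rather than doing anything. A quick manual check of that contract:

    # expected to print "exit status: 85" while the addons-086339 profile does not exist
    out/minikube-linux-amd64 addons disable dashboard -p addons-086339; echo "exit status: $?"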

                                                
                                    
TestAddons/Setup (151.03s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-086339 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-086339 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m31.026749917s)
--- PASS: TestAddons/Setup (151.03s)
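
Setup enables the full addon list in a single start invocation. A smaller, hedged example of the same mechanism, limited to a couple of the addons exercised later in this report:

    # enable a subset of addons at start time
    out/minikube-linux-amd64 start -p addons-086339 --memory=4096 --driver=kvm2 --container-runtime=crio --addons=ingress --addons=metrics-server
    # or enable an individual addon on an already-running cluster
    out/minikube-linux-amd64 addons enable registry -p addons-086339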

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-086339 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-086339 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (10.62s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-086339 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-086339 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bd0f0b90-ebd1-434e-86db-7717f59bb0b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bd0f0b90-ebd1-434e-86db-7717f59bb0b2] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.005473218s
addons_test.go:694: (dbg) Run:  kubectl --context addons-086339 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-086339 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-086339 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.62s)

                                                
                                    
TestAddons/parallel/Registry (61.93s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 11.800612ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-8zvc5" [23d65f21-71d0-4da4-8f2f-5b59f93f9085] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008044866s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-4p4n9" [73d260fc-8c68-439c-a460-208cdb29b271] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.008509984s
addons_test.go:392: (dbg) Run:  kubectl --context addons-086339 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-086339 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-086339 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (50.086723352s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 ip
2025/11/01 09:53:49 [DEBUG] GET http://192.168.39.58:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (61.93s)
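
After the in-cluster wget probe, the test resolves the node IP and hits the registry directly (the DEBUG GET line above). A hedged follow-up from the host using the standard Docker registry HTTP API (the /v2/_catalog path is an assumption, not something this log shows):

    # list repositories exposed by the registry addon on the node IP from the log
    curl http://192.168.39.58:5000/v2/_catalog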

                                                
                                    
TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.295406ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-086339
addons_test.go:332: (dbg) Run:  kubectl --context addons-086339 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.66s)

                                                
                                    
TestAddons/parallel/InspektorGadget (6.31s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-p2brt" [7d9684ff-4d35-4cab-b655-c3fcbbfaa552] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005172933s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.31s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 11.49127ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-6lx9r" [c4e44e90-7d77-43fc-913f-f26877e52760] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005643923s
addons_test.go:463: (dbg) Run:  kubectl --context addons-086339 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.01s)

                                                
                                    
TestAddons/parallel/Headlamp (22.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-086339 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-086339 --alsologtostderr -v=1: (1.144559964s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-299wg" [0d2a4629-f76b-4e9a-a57e-81196dfa7f91] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-299wg" [0d2a4629-f76b-4e9a-a57e-81196dfa7f91] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 16.004422132s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 addons disable headlamp --alsologtostderr -v=1: (5.840737637s)
--- PASS: TestAddons/parallel/Headlamp (22.99s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-2qnnc" [22dd45d0-38f0-434e-b76c-1c7a0c7be5f3] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004726645s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.73s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-jh2xq" [0a9234e2-8d6a-4110-86be-ff05f9be1a29] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.009141995s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.73s)

                                                
                                    
TestAddons/parallel/Yakd (11.77s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-q9wk5" [ff034bdd-3741-4e29-8650-3f71b3f05989] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004924369s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-086339 addons disable yakd --alsologtostderr -v=1: (5.76820378s)
--- PASS: TestAddons/parallel/Yakd (11.77s)

                                                
                                    
TestAddons/StoppedEnableDisable (70.95s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-086339
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-086339: (1m10.74471048s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-086339
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-086339
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-086339
--- PASS: TestAddons/StoppedEnableDisable (70.95s)

                                                
                                    
TestCertOptions (67.08s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-970426 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-970426 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m5.835774323s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-970426 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-970426 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-970426 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-970426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-970426
--- PASS: TestCertOptions (67.08s)
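
The openssl call above dumps the entire API server certificate; the assertions presumably target the extra SANs and the non-default port passed on the command line. A narrower, hedged variant of the same inspection:

    # expect 192.168.15.15 and www.google.com among the SANs; 8555 is the API server port requested above
    out/minikube-linux-amd64 -p cert-options-970426 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'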

                                                
                                    
TestCertExpiration (318.74s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-917729 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-917729 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m33.572935409s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-917729 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-917729 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (44.260333663s)
helpers_test.go:175: Cleaning up "cert-expiration-917729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-917729
--- PASS: TestCertExpiration (318.74s)
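
The test first creates a cluster with three-minute certificates, then restarts it with --cert-expiration=8760h. A hedged spot-check of the resulting expiry date, assuming the same certificate path used by TestCertOptions above:

    # print the notAfter date of the API server certificate after the second start
    out/minikube-linux-amd64 -p cert-expiration-917729 ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"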

                                                
                                    
TestForceSystemdFlag (87.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-604567 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-604567 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m26.62743339s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-604567 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-604567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-604567
--- PASS: TestForceSystemdFlag (87.68s)
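
With --force-systemd the test cats the CRI-O drop-in config; the value it presumably asserts on is the cgroup manager. A hedged, narrower version of the same check (cgroup_manager is CRI-O's standard setting name, not quoted anywhere in this log):

    # expected to show: cgroup_manager = "systemd"
    out/minikube-linux-amd64 -p force-systemd-flag-604567 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"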

                                                
                                    
TestForceSystemdEnv (41.46s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-297549 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-297549 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (40.574089033s)
helpers_test.go:175: Cleaning up "force-systemd-env-297549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-297549
--- PASS: TestForceSystemdEnv (41.46s)

                                                
                                    
TestErrorSpam/setup (39.86s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-822818 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-822818 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-822818 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-822818 --driver=kvm2  --container-runtime=crio: (39.864251447s)
--- PASS: TestErrorSpam/setup (39.86s)

                                                
                                    
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
TestErrorSpam/pause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 pause
--- PASS: TestErrorSpam/pause (1.63s)

                                                
                                    
TestErrorSpam/unpause (1.93s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 unpause
--- PASS: TestErrorSpam/unpause (1.93s)

                                                
                                    
TestErrorSpam/stop (5.22s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 stop: (2.031319183s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 stop: (1.409438879s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-822818 --log_dir /tmp/nospam-822818 stop: (1.782491152s)
--- PASS: TestErrorSpam/stop (5.22s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21830-70113/.minikube/files/etc/test/nested/copy/73998/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (82.18s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-950389 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-950389 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m22.178488454s)
--- PASS: TestFunctional/serial/StartWithProxy (82.18s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (376.22s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 10:04:46.900696   73998 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-950389 --alsologtostderr -v=8
E1101 10:07:29.154308   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:29.160718   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:29.172148   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:29.193604   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:29.235096   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:29.316677   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:29.478869   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:29.800984   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:30.443245   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:31.724886   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:34.286690   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:39.408612   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:07:49.650990   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:08:10.132454   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:08:51.095477   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:10:13.016938   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-950389 --alsologtostderr -v=8: (6m16.219019908s)
functional_test.go:678: soft start took 6m16.219792479s for "functional-950389" cluster.
I1101 10:11:03.120127   73998 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (376.22s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-950389 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 cache add registry.k8s.io/pause:3.1: (1.086178113s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 cache add registry.k8s.io/pause:3.3: (1.127261348s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 cache add registry.k8s.io/pause:latest: (1.105693271s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-950389 /tmp/TestFunctionalserialCacheCmdcacheadd_local2393681230/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 cache add minikube-local-cache-test:functional-950389
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 cache add minikube-local-cache-test:functional-950389: (1.651812937s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 cache delete minikube-local-cache-test:functional-950389
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-950389
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.00s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.19s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (201.204389ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 cache reload: (1.016921394s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
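The sequence above (remove an image from the node with crictl, confirm it is gone, then restore it from minikube's local cache) can be reproduced by hand roughly as follows. This is a sketch, not part of the harness; it assumes a minikube binary on PATH and reuses the functional-950389 profile and pause image from this run:

    minikube -p functional-950389 cache add registry.k8s.io/pause:latest
    minikube -p functional-950389 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-950389 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image gone
    minikube -p functional-950389 cache reload
    minikube -p functional-950389 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds once the cache is reloaded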

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 kubectl -- --context functional-950389 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-950389 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.6s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-950389 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-950389 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.604095623s)
functional_test.go:776: restart took 40.604232953s for "functional-950389" cluster.
I1101 10:11:51.510361   73998 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (40.60s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-950389 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 logs: (1.667375797s)
--- PASS: TestFunctional/serial/LogsCmd (1.67s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 logs --file /tmp/TestFunctionalserialLogsFileCmd2975947800/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 logs --file /tmp/TestFunctionalserialLogsFileCmd2975947800/001/logs.txt: (1.644483781s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                    
TestFunctional/serial/InvalidService (4.29s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-950389 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-950389
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-950389: exit status 115 (242.236908ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL             │
	├───────────┼─────────────┼─────────────┼────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.40:31179 │
	└───────────┴─────────────┴─────────────┴────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-950389 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 config get cpus: exit status 14 (61.853147ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 config get cpus: exit status 14 (72.045928ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
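Exit status 14 above is the expected answer from config get for a key that is not set, not a failure of the run. A minimal by-hand version of the same round trip, assuming the same minikube binary and profile:

    minikube -p functional-950389 config get cpus      # exits 14 while the key is unset
    minikube -p functional-950389 config set cpus 2
    minikube -p functional-950389 config get cpus      # prints the stored value and exits 0
    minikube -p functional-950389 config unset cpus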

                                                
                                    
TestFunctional/parallel/DryRun (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-950389 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-950389 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (113.714903ms)

                                                
                                                
-- stdout --
	* [functional-950389] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:12:23.179038   81989 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:12:23.179272   81989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:12:23.179280   81989 out.go:374] Setting ErrFile to fd 2...
	I1101 10:12:23.179284   81989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:12:23.179441   81989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 10:12:23.179871   81989 out.go:368] Setting JSON to false
	I1101 10:12:23.180666   81989 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6891,"bootTime":1761985052,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:12:23.180757   81989 start.go:143] virtualization: kvm guest
	I1101 10:12:23.182716   81989 out.go:179] * [functional-950389] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:12:23.184059   81989 notify.go:221] Checking for updates...
	I1101 10:12:23.184076   81989 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:12:23.185418   81989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:12:23.186715   81989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 10:12:23.188037   81989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 10:12:23.189265   81989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:12:23.190358   81989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:12:23.191834   81989 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:12:23.192294   81989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:12:23.223185   81989 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 10:12:23.224657   81989 start.go:309] selected driver: kvm2
	I1101 10:12:23.224681   81989 start.go:930] validating driver "kvm2" against &{Name:functional-950389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-950389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:12:23.224809   81989 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:12:23.226927   81989 out.go:203] 
	W1101 10:12:23.228403   81989 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 10:12:23.229646   81989 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-950389 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.22s)
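Exit status 23 is minikube's RSRC_INSUFFICIENT_REQ_MEMORY validation rejecting the 250MB request before any resources are created. The same dry-run check can be used by hand to vet a start command; a sketch with the binary path shortened to minikube:

    minikube start -p functional-950389 --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    echo $?   # 23: requested memory is below the usable minimum of 1800MB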

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-950389 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-950389 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (117.557474ms)

                                                
                                                
-- stdout --
	* [functional-950389] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:18:11.338627   83952 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:18:11.338735   83952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:18:11.338746   83952 out.go:374] Setting ErrFile to fd 2...
	I1101 10:18:11.338753   83952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:18:11.339054   83952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 10:18:11.339498   83952 out.go:368] Setting JSON to false
	I1101 10:18:11.340456   83952 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":7239,"bootTime":1761985052,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:18:11.340559   83952 start.go:143] virtualization: kvm guest
	I1101 10:18:11.342408   83952 out.go:179] * [functional-950389] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1101 10:18:11.343782   83952 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 10:18:11.343812   83952 notify.go:221] Checking for updates...
	I1101 10:18:11.346339   83952 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:18:11.347789   83952 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 10:18:11.348964   83952 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 10:18:11.350180   83952 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:18:11.351488   83952 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:18:11.353388   83952 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:18:11.354047   83952 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:18:11.385249   83952 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1101 10:18:11.386557   83952 start.go:309] selected driver: kvm2
	I1101 10:18:11.386576   83952 start.go:930] validating driver "kvm2" against &{Name:functional-950389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-950389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:18:11.386679   83952 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:18:11.388574   83952 out.go:203] 
	W1101 10:18:11.389749   83952 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 10:18:11.390833   83952 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.72s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.37s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh -n functional-950389 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 cp functional-950389:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1718879492/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh -n functional-950389 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh -n functional-950389 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.14s)

                                                
                                    
TestFunctional/parallel/MySQL (23.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-950389 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-nnckx" [dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-nnckx" [dd6e3839-b3cf-4ec6-ad2c-a1c6d778799b] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.415021067s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-950389 exec mysql-5bb876957f-nnckx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-950389 exec mysql-5bb876957f-nnckx -- mysql -ppassword -e "show databases;": exit status 1 (123.521449ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 10:12:20.384984   73998 retry.go:31] will retry after 900.052474ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-950389 exec mysql-5bb876957f-nnckx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-950389 exec mysql-5bb876957f-nnckx -- mysql -ppassword -e "show databases;": exit status 1 (194.88237ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 10:12:21.480412   73998 retry.go:31] will retry after 1.514431842s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-950389 exec mysql-5bb876957f-nnckx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.48s)
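The two ERROR 2002 exits above only indicate that mysqld was still starting inside the pod; the harness retries until the query succeeds. A rough by-hand equivalent, assuming testdata/mysql.yaml creates a Deployment named mysql (as the generated pod name suggests) carrying the app=mysql label the test waits on:

    kubectl --context functional-950389 wait --for=condition=ready pod -l app=mysql --timeout=10m
    # a Ready pod can still refuse connections for a few seconds, so poll the query itself
    until kubectl --context functional-950389 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
      sleep 2
    done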

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/73998/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "sudo cat /etc/test/nested/copy/73998/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
TestFunctional/parallel/CertSync (1.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/73998.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "sudo cat /etc/ssl/certs/73998.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/73998.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "sudo cat /usr/share/ca-certificates/73998.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/739982.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "sudo cat /etc/ssl/certs/739982.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/739982.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "sudo cat /usr/share/ca-certificates/739982.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.15s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-950389 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 ssh "sudo systemctl is-active docker": exit status 1 (197.500307ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 ssh "sudo systemctl is-active containerd": exit status 1 (200.220127ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)

                                                
                                    
TestFunctional/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.39s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-950389 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-950389
localhost/kicbase/echo-server:functional-950389
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-950389 image ls --format short --alsologtostderr:
I1101 10:18:12.201166   84023 out.go:360] Setting OutFile to fd 1 ...
I1101 10:18:12.201422   84023 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:18:12.201431   84023 out.go:374] Setting ErrFile to fd 2...
I1101 10:18:12.201435   84023 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:18:12.201645   84023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
I1101 10:18:12.202188   84023 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:18:12.202285   84023 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:18:12.204562   84023 ssh_runner.go:195] Run: systemctl --version
I1101 10:18:12.206671   84023 main.go:143] libmachine: domain functional-950389 has defined MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:18:12.207132   84023 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:b8:2f", ip: ""} in network mk-functional-950389: {Iface:virbr1 ExpiryTime:2025-11-01 11:03:40 +0000 UTC Type:0 Mac:52:54:00:b9:b8:2f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-950389 Clientid:01:52:54:00:b9:b8:2f}
I1101 10:18:12.207159   84023 main.go:143] libmachine: domain functional-950389 has defined IP address 192.168.39.40 and MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:18:12.207311   84023 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/functional-950389/id_rsa Username:docker}
I1101 10:18:12.293636   84023 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-950389 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/kicbase/echo-server           │ functional-950389  │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-950389  │ c18f9d72312d9 │ 3.33kB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ localhost/my-image                      │ functional-950389  │ 7372ebf92f2e1 │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-950389 image ls --format table --alsologtostderr:
I1101 10:18:16.386311   84104 out.go:360] Setting OutFile to fd 1 ...
I1101 10:18:16.386577   84104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:18:16.386585   84104 out.go:374] Setting ErrFile to fd 2...
I1101 10:18:16.386590   84104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:18:16.386769   84104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
I1101 10:18:16.387403   84104 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:18:16.387497   84104 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:18:16.389408   84104 ssh_runner.go:195] Run: systemctl --version
I1101 10:18:16.391402   84104 main.go:143] libmachine: domain functional-950389 has defined MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:18:16.391766   84104 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:b8:2f", ip: ""} in network mk-functional-950389: {Iface:virbr1 ExpiryTime:2025-11-01 11:03:40 +0000 UTC Type:0 Mac:52:54:00:b9:b8:2f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-950389 Clientid:01:52:54:00:b9:b8:2f}
I1101 10:18:16.391792   84104 main.go:143] libmachine: domain functional-950389 has defined IP address 192.168.39.40 and MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:18:16.391927   84104 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/functional-950389/id_rsa Username:docker}
I1101 10:18:16.475779   84104 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-950389 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"7372ebf92f2e1b332fed918e81aa91c59b2a0f0206b5a6da958cb8f6cb8b3eb6","repoDigests":["localhost/my-image@sha256:c87d2e87aecf67c719e29ccdea79d7e365865986f291a0364620b58c3bba9c6c"],"repoTags":["localhost/my-image:functional-950389"],"size":"1468600"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103
547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"e4375f24f5fde0b37200e6b200ffb4a0450250014bd18c38467bd0af640c9e24","repoDigests":
["docker.io/library/cedb6b791239dbb3f86c1386fcd7f66fea54f8b3ebdfbf9a4a30ed0f2898498a-tmp@sha256:c919b52f56efe93d7884b5e55942125b92263b5ba248db2053b6ac5c03c9c38d"],"repoTags":[],"size":"1466018"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDige
sts":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"c18f9d72312d9c27936fde38d426d8d474578fbabc695ff8ebf01c9f5dffa02d","repoDigests":["localhost/minikube-local-cache-test@sha256
:2290625fa3d873a95d4c67a3ed3902fa01ad7c7cfe448897e84c35fc81c0a274"],"repoTags":["localhost/minikube-local-cache-test:functional-950389"],"size":"3330"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e5139252
4dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-950389"],"size":"4943877"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad6197240
4e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-950389 image ls --format json --alsologtostderr:
I1101 10:18:16.186524   84094 out.go:360] Setting OutFile to fd 1 ...
I1101 10:18:16.186803   84094 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:18:16.186815   84094 out.go:374] Setting ErrFile to fd 2...
I1101 10:18:16.186821   84094 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:18:16.187040   84094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
I1101 10:18:16.187636   84094 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:18:16.187780   84094 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:18:16.189776   84094 ssh_runner.go:195] Run: systemctl --version
I1101 10:18:16.191895   84094 main.go:143] libmachine: domain functional-950389 has defined MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:18:16.192277   84094 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:b8:2f", ip: ""} in network mk-functional-950389: {Iface:virbr1 ExpiryTime:2025-11-01 11:03:40 +0000 UTC Type:0 Mac:52:54:00:b9:b8:2f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-950389 Clientid:01:52:54:00:b9:b8:2f}
I1101 10:18:16.192305   84094 main.go:143] libmachine: domain functional-950389 has defined IP address 192.168.39.40 and MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:18:16.192433   84094 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/functional-950389/id_rsa Username:docker}
I1101 10:18:16.277230   84094 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-950389 image ls --format yaml --alsologtostderr:
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-950389
size: "4943877"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c18f9d72312d9c27936fde38d426d8d474578fbabc695ff8ebf01c9f5dffa02d
repoDigests:
- localhost/minikube-local-cache-test@sha256:2290625fa3d873a95d4c67a3ed3902fa01ad7c7cfe448897e84c35fc81c0a274
repoTags:
- localhost/minikube-local-cache-test:functional-950389
size: "3330"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-950389 image ls --format yaml --alsologtostderr:
I1101 10:18:12.401159   84034 out.go:360] Setting OutFile to fd 1 ...
I1101 10:18:12.401397   84034 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:18:12.401406   84034 out.go:374] Setting ErrFile to fd 2...
I1101 10:18:12.401410   84034 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:18:12.401652   84034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
I1101 10:18:12.402274   84034 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:18:12.402406   84034 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:18:12.404591   84034 ssh_runner.go:195] Run: systemctl --version
I1101 10:18:12.406593   84034 main.go:143] libmachine: domain functional-950389 has defined MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:18:12.406978   84034 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:b8:2f", ip: ""} in network mk-functional-950389: {Iface:virbr1 ExpiryTime:2025-11-01 11:03:40 +0000 UTC Type:0 Mac:52:54:00:b9:b8:2f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-950389 Clientid:01:52:54:00:b9:b8:2f}
I1101 10:18:12.407003   84034 main.go:143] libmachine: domain functional-950389 has defined IP address 192.168.39.40 and MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:18:12.407183   84034 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/functional-950389/id_rsa Username:docker}
I1101 10:18:12.489524   84034 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.19s)
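The four listing formats above (short, table, json, yaml) are all rendered client-side from one runtime query; each stderr trace ends with the same "sudo crictl images --output json" call executed on the node over SSH. A minimal manual reproduction (a sketch only, assuming the functional-950389 profile is still running and the binary path used throughout these logs):

# Query the CRI-O image store directly, as the test helper does internally
out/minikube-linux-amd64 -p functional-950389 ssh -- sudo crictl images --output json
# Let minikube render the same data in any of the formats exercised above
out/minikube-linux-amd64 -p functional-950389 image ls --format table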

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.6s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 ssh pgrep buildkitd: exit status 1 (159.198738ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image build -t localhost/my-image:functional-950389 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 image build -t localhost/my-image:functional-950389 testdata/build --alsologtostderr: (3.225704984s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-950389 image build -t localhost/my-image:functional-950389 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e4375f24f5f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-950389
--> 7372ebf92f2
Successfully tagged localhost/my-image:functional-950389
7372ebf92f2e1b332fed918e81aa91c59b2a0f0206b5a6da958cb8f6cb8b3eb6
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-950389 image build -t localhost/my-image:functional-950389 testdata/build --alsologtostderr:
I1101 10:18:12.751442   84057 out.go:360] Setting OutFile to fd 1 ...
I1101 10:18:12.751700   84057 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:18:12.751715   84057 out.go:374] Setting ErrFile to fd 2...
I1101 10:18:12.751719   84057 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 10:18:12.751926   84057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
I1101 10:18:12.752550   84057 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:18:12.753212   84057 config.go:182] Loaded profile config "functional-950389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 10:18:12.755334   84057 ssh_runner.go:195] Run: systemctl --version
I1101 10:18:12.757359   84057 main.go:143] libmachine: domain functional-950389 has defined MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:18:12.757703   84057 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:b9:b8:2f", ip: ""} in network mk-functional-950389: {Iface:virbr1 ExpiryTime:2025-11-01 11:03:40 +0000 UTC Type:0 Mac:52:54:00:b9:b8:2f Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-950389 Clientid:01:52:54:00:b9:b8:2f}
I1101 10:18:12.757729   84057 main.go:143] libmachine: domain functional-950389 has defined IP address 192.168.39.40 and MAC address 52:54:00:b9:b8:2f in network mk-functional-950389
I1101 10:18:12.757857   84057 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/functional-950389/id_rsa Username:docker}
I1101 10:18:12.842799   84057 build_images.go:162] Building image from path: /tmp/build.4023775598.tar
I1101 10:18:12.842925   84057 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 10:18:12.858180   84057 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4023775598.tar
I1101 10:18:12.863811   84057 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4023775598.tar: stat -c "%s %y" /var/lib/minikube/build/build.4023775598.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.4023775598.tar': No such file or directory
I1101 10:18:12.863883   84057 ssh_runner.go:362] scp /tmp/build.4023775598.tar --> /var/lib/minikube/build/build.4023775598.tar (3072 bytes)
I1101 10:18:12.897707   84057 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4023775598
I1101 10:18:12.910257   84057 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4023775598 -xf /var/lib/minikube/build/build.4023775598.tar
I1101 10:18:12.922336   84057 crio.go:315] Building image: /var/lib/minikube/build/build.4023775598
I1101 10:18:12.922468   84057 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-950389 /var/lib/minikube/build/build.4023775598 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1101 10:18:15.886826   84057 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-950389 /var/lib/minikube/build/build.4023775598 --cgroup-manager=cgroupfs: (2.96432452s)
I1101 10:18:15.886905   84057 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4023775598
I1101 10:18:15.902188   84057 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4023775598.tar
I1101 10:18:15.914970   84057 build_images.go:218] Built localhost/my-image:functional-950389 from /tmp/build.4023775598.tar
I1101 10:18:15.915008   84057 build_images.go:134] succeeded building to: functional-950389
I1101 10:18:15.915014   84057 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.60s)
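The STEP lines above imply that the testdata/build context is a three-instruction Containerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /); that reading is inferred from the log, not copied from the repository. On the crio runtime the build is delegated to podman on the node, as the "sudo podman build ... --cgroup-manager=cgroupfs" line shows. Repeating it by hand is just the two logged commands (a sketch, assuming the same profile is still up):

# Rebuild the image from the same build context the test used
out/minikube-linux-amd64 -p functional-950389 image build -t localhost/my-image:functional-950389 testdata/build --alsologtostderr
# Confirm the tag now appears in the runtime's image store
out/minikube-linux-amd64 -p functional-950389 image ls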

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.71s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.691892926s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-950389
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.71s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "275.68812ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.488254ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.65s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image load --daemon kicbase/echo-server:functional-950389 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 image load --daemon kicbase/echo-server:functional-950389 --alsologtostderr: (1.30165455s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.65s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "274.340142ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "67.174995ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image load --daemon kicbase/echo-server:functional-950389 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.99s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-950389
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image load --daemon kicbase/echo-server:functional-950389 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 image load --daemon kicbase/echo-server:functional-950389 --alsologtostderr: (8.010366881s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.99s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image save kicbase/echo-server:functional-950389 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.84s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image rm kicbase/echo-server:functional-950389 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.84s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.91s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-950389
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 image save --daemon kicbase/echo-server:functional-950389 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-950389
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)
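Taken together, the last four image tests round-trip an image through a tar archive and back into the host daemon: save to a file, remove it from the runtime, load it from the file, then save --daemon and inspect it with docker. A condensed sketch of the same flow (the /tmp path is an arbitrary stand-in; the test wrote into its Jenkins workspace):

# Save the image from the cluster runtime to a local tarball
out/minikube-linux-amd64 -p functional-950389 image save kicbase/echo-server:functional-950389 /tmp/echo-server-save.tar
# Remove it from the runtime, then load it back from the tarball
out/minikube-linux-amd64 -p functional-950389 image rm kicbase/echo-server:functional-950389
out/minikube-linux-amd64 -p functional-950389 image load /tmp/echo-server-save.tar
# Push it into the host Docker daemon and verify it landed there
out/minikube-linux-amd64 -p functional-950389 image save --daemon kicbase/echo-server:functional-950389
docker image inspect localhost/kicbase/echo-server:functional-950389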

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (104.84s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-950389 /tmp/TestFunctionalparallelMountCmdany-port2185038879/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761991943345565187" to /tmp/TestFunctionalparallelMountCmdany-port2185038879/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761991943345565187" to /tmp/TestFunctionalparallelMountCmdany-port2185038879/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761991943345565187" to /tmp/TestFunctionalparallelMountCmdany-port2185038879/001/test-1761991943345565187
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (156.431768ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1101 10:12:23.502286   73998 retry.go:31] will retry after 372.60019ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 10:12 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 10:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 10:12 test-1761991943345565187
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh cat /mount-9p/test-1761991943345565187
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-950389 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [60978975-8366-41aa-b97a-93a1c86afe6c] Pending
helpers_test.go:352: "busybox-mount" [60978975-8366-41aa-b97a-93a1c86afe6c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1101 10:12:29.154257   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:12:56.858529   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [60978975-8366-41aa-b97a-93a1c86afe6c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [60978975-8366-41aa-b97a-93a1c86afe6c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 1m43.00354378s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-950389 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-950389 /tmp/TestFunctionalparallelMountCmdany-port2185038879/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (104.84s)
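The any-port test drives a host-to-guest 9p mount end to end: start "minikube mount", confirm from inside the VM that /mount-9p is a 9p filesystem, then run a busybox pod that reads and removes files through it. A minimal manual check of the same mount (a sketch; /tmp/demo-mount is an arbitrary stand-in for the per-test temp directory):

# Terminal 1: serve a host directory into the guest over 9p (stays in the foreground)
out/minikube-linux-amd64 mount -p functional-950389 /tmp/demo-mount:/mount-9p
# Terminal 2: verify the guest sees the 9p mount and its contents
out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-950389 ssh -- ls -la /mount-9p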

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.22s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-950389 /tmp/TestFunctionalparallelMountCmdspecific-port3461040035/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (158.507208ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1101 10:14:08.348952   73998 retry.go:31] will retry after 375.783174ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-950389 /tmp/TestFunctionalparallelMountCmdspecific-port3461040035/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 ssh "sudo umount -f /mount-9p": exit status 1 (157.991545ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-950389 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-950389 /tmp/TestFunctionalparallelMountCmdspecific-port3461040035/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.22s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.05s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T" /mount1: exit status 1 (171.221039ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1101 10:14:09.583386   73998 retry.go:31] will retry after 337.896714ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-950389 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-950389 /tmp/TestFunctionalparallelMountCmdVerifyCleanup996814724/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.05s)

TestFunctional/parallel/ServiceCmd/List (1.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 service list: (1.203573612s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.20s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.2s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-950389 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-950389 service list -o json: (1.202711714s)
functional_test.go:1504: Took "1.202799603s" to run "out/minikube-linux-amd64 -p functional-950389 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.20s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-950389
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-950389
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-950389
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (212.32s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1101 10:22:29.153885   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:23:52.222282   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-124269 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m31.756462943s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (212.32s)

TestMultiControlPlane/serial/DeployApp (8.38s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-124269 kubectl -- rollout status deployment/busybox: (6.047513212s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-m9qx4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-mw7dk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-pqcvn -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-m9qx4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-mw7dk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-pqcvn -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-m9qx4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-mw7dk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-pqcvn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.38s)

TestMultiControlPlane/serial/PingHostFromPods (1.36s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-m9qx4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-m9qx4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-mw7dk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-mw7dk -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-pqcvn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 kubectl -- exec busybox-7b57f96db7-pqcvn -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.36s)

TestMultiControlPlane/serial/AddWorkerNode (47.01s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-124269 node add --alsologtostderr -v 5: (46.295743602s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.01s)

TestMultiControlPlane/serial/NodeLabels (0.09s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-124269 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.09s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.7s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.70s)

TestMultiControlPlane/serial/CopyFile (11.01s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp testdata/cp-test.txt ha-124269:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1817118285/001/cp-test_ha-124269.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269:/home/docker/cp-test.txt ha-124269-m02:/home/docker/cp-test_ha-124269_ha-124269-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m02 "sudo cat /home/docker/cp-test_ha-124269_ha-124269-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269:/home/docker/cp-test.txt ha-124269-m03:/home/docker/cp-test_ha-124269_ha-124269-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m03 "sudo cat /home/docker/cp-test_ha-124269_ha-124269-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269:/home/docker/cp-test.txt ha-124269-m04:/home/docker/cp-test_ha-124269_ha-124269-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m04 "sudo cat /home/docker/cp-test_ha-124269_ha-124269-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp testdata/cp-test.txt ha-124269-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1817118285/001/cp-test_ha-124269-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m02:/home/docker/cp-test.txt ha-124269:/home/docker/cp-test_ha-124269-m02_ha-124269.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269 "sudo cat /home/docker/cp-test_ha-124269-m02_ha-124269.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m02:/home/docker/cp-test.txt ha-124269-m03:/home/docker/cp-test_ha-124269-m02_ha-124269-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m03 "sudo cat /home/docker/cp-test_ha-124269-m02_ha-124269-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m02:/home/docker/cp-test.txt ha-124269-m04:/home/docker/cp-test_ha-124269-m02_ha-124269-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m04 "sudo cat /home/docker/cp-test_ha-124269-m02_ha-124269-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp testdata/cp-test.txt ha-124269-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1817118285/001/cp-test_ha-124269-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m03:/home/docker/cp-test.txt ha-124269:/home/docker/cp-test_ha-124269-m03_ha-124269.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m03 "sudo cat /home/docker/cp-test.txt"
E1101 10:26:59.846357   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:26:59.852814   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:26:59.864263   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:26:59.885722   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:26:59.927213   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269 "sudo cat /home/docker/cp-test_ha-124269-m03_ha-124269.txt"
E1101 10:27:00.008514   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m03:/home/docker/cp-test.txt ha-124269-m02:/home/docker/cp-test_ha-124269-m03_ha-124269-m02.txt
E1101 10:27:00.170361   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m03 "sudo cat /home/docker/cp-test.txt"
E1101 10:27:00.492030   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m02 "sudo cat /home/docker/cp-test_ha-124269-m03_ha-124269-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m03:/home/docker/cp-test.txt ha-124269-m04:/home/docker/cp-test_ha-124269-m03_ha-124269-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m03 "sudo cat /home/docker/cp-test.txt"
E1101 10:27:01.133603   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m04 "sudo cat /home/docker/cp-test_ha-124269-m03_ha-124269-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp testdata/cp-test.txt ha-124269-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1817118285/001/cp-test_ha-124269-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m04:/home/docker/cp-test.txt ha-124269:/home/docker/cp-test_ha-124269-m04_ha-124269.txt
E1101 10:27:02.415825   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269 "sudo cat /home/docker/cp-test_ha-124269-m04_ha-124269.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m04:/home/docker/cp-test.txt ha-124269-m02:/home/docker/cp-test_ha-124269-m04_ha-124269-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m02 "sudo cat /home/docker/cp-test_ha-124269-m04_ha-124269-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m04:/home/docker/cp-test.txt ha-124269-m03:/home/docker/cp-test_ha-124269-m04_ha-124269-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m03 "sudo cat /home/docker/cp-test_ha-124269-m04_ha-124269-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.01s)
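
Every copy check above follows one pattern: minikube cp pushes a file to a node (or pulls it back to the host), and minikube ssh -n on the destination node reads it back to confirm the contents arrived. A minimal sketch of one host-to-node and one node-to-node round trip, using the same profile and node names as this run:

    # host -> node, then read it back over ssh
    out/minikube-linux-amd64 -p ha-124269 cp testdata/cp-test.txt ha-124269-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m02 "sudo cat /home/docker/cp-test.txt"
    # node -> node, verified on the destination node
    out/minikube-linux-amd64 -p ha-124269 cp ha-124269-m02:/home/docker/cp-test.txt ha-124269-m03:/home/docker/cp-test_ha-124269-m02_ha-124269-m03.txt
    out/minikube-linux-amd64 -p ha-124269 ssh -n ha-124269-m03 "sudo cat /home/docker/cp-test_ha-124269-m02_ha-124269-m03.txt"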

TestMultiControlPlane/serial/StopSecondaryNode (86.42s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 node stop m02 --alsologtostderr -v 5
E1101 10:27:04.977174   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:27:10.099496   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:27:20.341260   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:27:29.156446   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:27:40.822992   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:28:21.785919   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-124269 node stop m02 --alsologtostderr -v 5: (1m25.873384136s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5: exit status 7 (542.05024ms)

-- stdout --
	ha-124269
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-124269-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-124269-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-124269-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1101 10:28:29.935917   88250 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:28:29.936181   88250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:28:29.936191   88250 out.go:374] Setting ErrFile to fd 2...
	I1101 10:28:29.936195   88250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:28:29.936414   88250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 10:28:29.936637   88250 out.go:368] Setting JSON to false
	I1101 10:28:29.936671   88250 mustload.go:66] Loading cluster: ha-124269
	I1101 10:28:29.936704   88250 notify.go:221] Checking for updates...
	I1101 10:28:29.937111   88250 config.go:182] Loaded profile config "ha-124269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:28:29.937130   88250 status.go:174] checking status of ha-124269 ...
	I1101 10:28:29.939231   88250 status.go:371] ha-124269 host status = "Running" (err=<nil>)
	I1101 10:28:29.939249   88250 host.go:66] Checking if "ha-124269" exists ...
	I1101 10:28:29.942087   88250 main.go:143] libmachine: domain ha-124269 has defined MAC address 52:54:00:cf:0e:b6 in network mk-ha-124269
	I1101 10:28:29.942659   88250 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cf:0e:b6", ip: ""} in network mk-ha-124269: {Iface:virbr1 ExpiryTime:2025-11-01 11:22:39 +0000 UTC Type:0 Mac:52:54:00:cf:0e:b6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-124269 Clientid:01:52:54:00:cf:0e:b6}
	I1101 10:28:29.942690   88250 main.go:143] libmachine: domain ha-124269 has defined IP address 192.168.39.227 and MAC address 52:54:00:cf:0e:b6 in network mk-ha-124269
	I1101 10:28:29.942881   88250 host.go:66] Checking if "ha-124269" exists ...
	I1101 10:28:29.943126   88250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:28:29.945752   88250 main.go:143] libmachine: domain ha-124269 has defined MAC address 52:54:00:cf:0e:b6 in network mk-ha-124269
	I1101 10:28:29.946192   88250 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:cf:0e:b6", ip: ""} in network mk-ha-124269: {Iface:virbr1 ExpiryTime:2025-11-01 11:22:39 +0000 UTC Type:0 Mac:52:54:00:cf:0e:b6 Iaid: IPaddr:192.168.39.227 Prefix:24 Hostname:ha-124269 Clientid:01:52:54:00:cf:0e:b6}
	I1101 10:28:29.946213   88250 main.go:143] libmachine: domain ha-124269 has defined IP address 192.168.39.227 and MAC address 52:54:00:cf:0e:b6 in network mk-ha-124269
	I1101 10:28:29.946386   88250 sshutil.go:53] new ssh client: &{IP:192.168.39.227 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/ha-124269/id_rsa Username:docker}
	I1101 10:28:30.038501   88250 ssh_runner.go:195] Run: systemctl --version
	I1101 10:28:30.046186   88250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:28:30.069213   88250 kubeconfig.go:125] found "ha-124269" server: "https://192.168.39.254:8443"
	I1101 10:28:30.069257   88250 api_server.go:166] Checking apiserver status ...
	I1101 10:28:30.069305   88250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:28:30.093488   88250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1425/cgroup
	W1101 10:28:30.113192   88250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1425/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:28:30.113263   88250 ssh_runner.go:195] Run: ls
	I1101 10:28:30.120329   88250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1101 10:28:30.128210   88250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1101 10:28:30.128235   88250 status.go:463] ha-124269 apiserver status = Running (err=<nil>)
	I1101 10:28:30.128246   88250 status.go:176] ha-124269 status: &{Name:ha-124269 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:28:30.128262   88250 status.go:174] checking status of ha-124269-m02 ...
	I1101 10:28:30.130095   88250 status.go:371] ha-124269-m02 host status = "Stopped" (err=<nil>)
	I1101 10:28:30.130118   88250 status.go:384] host is not running, skipping remaining checks
	I1101 10:28:30.130127   88250 status.go:176] ha-124269-m02 status: &{Name:ha-124269-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:28:30.130157   88250 status.go:174] checking status of ha-124269-m03 ...
	I1101 10:28:30.131660   88250 status.go:371] ha-124269-m03 host status = "Running" (err=<nil>)
	I1101 10:28:30.131677   88250 host.go:66] Checking if "ha-124269-m03" exists ...
	I1101 10:28:30.134159   88250 main.go:143] libmachine: domain ha-124269-m03 has defined MAC address 52:54:00:3d:d5:06 in network mk-ha-124269
	I1101 10:28:30.134689   88250 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3d:d5:06", ip: ""} in network mk-ha-124269: {Iface:virbr1 ExpiryTime:2025-11-01 11:24:35 +0000 UTC Type:0 Mac:52:54:00:3d:d5:06 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-124269-m03 Clientid:01:52:54:00:3d:d5:06}
	I1101 10:28:30.134723   88250 main.go:143] libmachine: domain ha-124269-m03 has defined IP address 192.168.39.22 and MAC address 52:54:00:3d:d5:06 in network mk-ha-124269
	I1101 10:28:30.134964   88250 host.go:66] Checking if "ha-124269-m03" exists ...
	I1101 10:28:30.135243   88250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:28:30.137988   88250 main.go:143] libmachine: domain ha-124269-m03 has defined MAC address 52:54:00:3d:d5:06 in network mk-ha-124269
	I1101 10:28:30.138523   88250 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:3d:d5:06", ip: ""} in network mk-ha-124269: {Iface:virbr1 ExpiryTime:2025-11-01 11:24:35 +0000 UTC Type:0 Mac:52:54:00:3d:d5:06 Iaid: IPaddr:192.168.39.22 Prefix:24 Hostname:ha-124269-m03 Clientid:01:52:54:00:3d:d5:06}
	I1101 10:28:30.138643   88250 main.go:143] libmachine: domain ha-124269-m03 has defined IP address 192.168.39.22 and MAC address 52:54:00:3d:d5:06 in network mk-ha-124269
	I1101 10:28:30.138883   88250 sshutil.go:53] new ssh client: &{IP:192.168.39.22 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/ha-124269-m03/id_rsa Username:docker}
	I1101 10:28:30.224915   88250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:28:30.246875   88250 kubeconfig.go:125] found "ha-124269" server: "https://192.168.39.254:8443"
	I1101 10:28:30.246916   88250 api_server.go:166] Checking apiserver status ...
	I1101 10:28:30.246962   88250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:28:30.272064   88250 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1794/cgroup
	W1101 10:28:30.285376   88250 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1794/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:28:30.285439   88250 ssh_runner.go:195] Run: ls
	I1101 10:28:30.291592   88250 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1101 10:28:30.296587   88250 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1101 10:28:30.296623   88250 status.go:463] ha-124269-m03 apiserver status = Running (err=<nil>)
	I1101 10:28:30.296633   88250 status.go:176] ha-124269-m03 status: &{Name:ha-124269-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:28:30.296666   88250 status.go:174] checking status of ha-124269-m04 ...
	I1101 10:28:30.298459   88250 status.go:371] ha-124269-m04 host status = "Running" (err=<nil>)
	I1101 10:28:30.298484   88250 host.go:66] Checking if "ha-124269-m04" exists ...
	I1101 10:28:30.301421   88250 main.go:143] libmachine: domain ha-124269-m04 has defined MAC address 52:54:00:67:14:df in network mk-ha-124269
	I1101 10:28:30.302021   88250 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:67:14:df", ip: ""} in network mk-ha-124269: {Iface:virbr1 ExpiryTime:2025-11-01 11:26:22 +0000 UTC Type:0 Mac:52:54:00:67:14:df Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-124269-m04 Clientid:01:52:54:00:67:14:df}
	I1101 10:28:30.302048   88250 main.go:143] libmachine: domain ha-124269-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:67:14:df in network mk-ha-124269
	I1101 10:28:30.302226   88250 host.go:66] Checking if "ha-124269-m04" exists ...
	I1101 10:28:30.302488   88250 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:28:30.305096   88250 main.go:143] libmachine: domain ha-124269-m04 has defined MAC address 52:54:00:67:14:df in network mk-ha-124269
	I1101 10:28:30.305612   88250 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:67:14:df", ip: ""} in network mk-ha-124269: {Iface:virbr1 ExpiryTime:2025-11-01 11:26:22 +0000 UTC Type:0 Mac:52:54:00:67:14:df Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-124269-m04 Clientid:01:52:54:00:67:14:df}
	I1101 10:28:30.305641   88250 main.go:143] libmachine: domain ha-124269-m04 has defined IP address 192.168.39.238 and MAC address 52:54:00:67:14:df in network mk-ha-124269
	I1101 10:28:30.305851   88250 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/ha-124269-m04/id_rsa Username:docker}
	I1101 10:28:30.392276   88250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:28:30.413508   88250 status.go:176] ha-124269-m04 status: &{Name:ha-124269-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (86.42s)
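
As the status dump shows, stopping m02 leaves the other two control-plane nodes serving the API through https://192.168.39.254:8443, but minikube status exits non-zero (exit status 7 on this run) because one of the hosts is stopped. A minimal way to observe the same behaviour, assuming the ha-124269 profile:

    out/minikube-linux-amd64 -p ha-124269 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5
    echo "status exit code: $?"   # 7 in this run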

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

TestMultiControlPlane/serial/RestartSecondaryNode (49.93s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-124269 node start m02 --alsologtostderr -v 5: (49.144708522s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (49.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (309.37s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 stop --alsologtostderr -v 5
E1101 10:29:43.709236   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:31:59.846834   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-124269 stop --alsologtostderr -v 5: (3m3.43023346s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 start --wait true --alsologtostderr -v 5
E1101 10:32:27.550731   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:32:29.154849   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-124269 start --wait true --alsologtostderr -v 5: (2m5.78878652s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (309.37s)
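
The point of this test is that a full stop/start cycle preserves the node list: it records the node list before the stop and compares it again after the restart. The same sequence, assuming the ha-124269 profile:

    out/minikube-linux-amd64 -p ha-124269 node list --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-124269 stop --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-124269 start --wait true --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-124269 node list --alsologtostderr -v 5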

TestMultiControlPlane/serial/DeleteSecondaryNode (19.51s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-124269 node delete m03 --alsologtostderr -v 5: (18.83195475s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (19.51s)
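
After deleting m03 the test confirms that every remaining node still reports Ready, using a go-template query over the node conditions. A sketch of the same check; the template below is the logged one, re-quoted so it can be pasted into an interactive shell, and the kubectl context is assumed to be ha-124269:

    out/minikube-linux-amd64 -p ha-124269 node delete m03 --alsologtostderr -v 5
    kubectl get nodes
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'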

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

TestMultiControlPlane/serial/StopCluster (257.11s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 stop --alsologtostderr -v 5
E1101 10:36:59.847150   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:37:29.159027   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-124269 stop --alsologtostderr -v 5: (4m17.041955932s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5: exit status 7 (67.015828ms)

-- stdout --
	ha-124269
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-124269-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-124269-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 10:39:08.212283   91374 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:39:08.212522   91374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:39:08.212541   91374 out.go:374] Setting ErrFile to fd 2...
	I1101 10:39:08.212545   91374 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:39:08.212755   91374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 10:39:08.212935   91374 out.go:368] Setting JSON to false
	I1101 10:39:08.212959   91374 mustload.go:66] Loading cluster: ha-124269
	I1101 10:39:08.213003   91374 notify.go:221] Checking for updates...
	I1101 10:39:08.213325   91374 config.go:182] Loaded profile config "ha-124269": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:39:08.213339   91374 status.go:174] checking status of ha-124269 ...
	I1101 10:39:08.215374   91374 status.go:371] ha-124269 host status = "Stopped" (err=<nil>)
	I1101 10:39:08.215390   91374 status.go:384] host is not running, skipping remaining checks
	I1101 10:39:08.215395   91374 status.go:176] ha-124269 status: &{Name:ha-124269 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:39:08.215412   91374 status.go:174] checking status of ha-124269-m02 ...
	I1101 10:39:08.216489   91374 status.go:371] ha-124269-m02 host status = "Stopped" (err=<nil>)
	I1101 10:39:08.216501   91374 status.go:384] host is not running, skipping remaining checks
	I1101 10:39:08.216505   91374 status.go:176] ha-124269-m02 status: &{Name:ha-124269-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:39:08.216517   91374 status.go:174] checking status of ha-124269-m04 ...
	I1101 10:39:08.217598   91374 status.go:371] ha-124269-m04 host status = "Stopped" (err=<nil>)
	I1101 10:39:08.217612   91374 status.go:384] host is not running, skipping remaining checks
	I1101 10:39:08.217617   91374 status.go:176] ha-124269-m04 status: &{Name:ha-124269-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (257.11s)

TestMultiControlPlane/serial/RestartCluster (100.73s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1101 10:40:32.223905   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-124269 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m40.095091819s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (100.73s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.52s)

TestMultiControlPlane/serial/AddSecondaryNode (84.46s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 node add --control-plane --alsologtostderr -v 5
E1101 10:41:59.846920   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-124269 node add --control-plane --alsologtostderr -v 5: (1m23.751190686s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.46s)
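
Here node add is run with --control-plane, which is what brings the cluster back to three control-plane members behind the shared endpoint (https://192.168.39.254:8443 in the status traces above); the earlier worker add used the same command without the flag. Sketch, same profile assumed:

    out/minikube-linux-amd64 -p ha-124269 node add --control-plane --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-124269 status --alsologtostderr -v 5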

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.72s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.72s)

TestJSONOutput/start/Command (80.22s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-881911 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
E1101 10:42:29.162397   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:22.914300   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-881911 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m20.215701395s)
--- PASS: TestJSONOutput/start/Command (80.22s)
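
The JSONOutput tests drive the normal lifecycle commands with --output=json --user=testUser, so minikube emits one JSON event per line instead of human-readable text (the Audit subtests presumably check the recorded --user value). A sketch of the logged sequence with the same throwaway profile name:

    out/minikube-linux-amd64 start -p json-output-881911 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 pause -p json-output-881911 --output=json --user=testUser
    out/minikube-linux-amd64 unpause -p json-output-881911 --output=json --user=testUser
    out/minikube-linux-amd64 stop -p json-output-881911 --output=json --user=testUser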

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.77s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-881911 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-881911 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.99s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-881911 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-881911 --output=json --user=testUser: (6.991258165s)
--- PASS: TestJSONOutput/stop/Command (6.99s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-706361 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-706361 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (80.50979ms)

-- stdout --
	{"specversion":"1.0","id":"a6928287-0cdc-4215-9b65-cb8f421efb82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-706361] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"172e2a70-c534-451f-8281-48e89ff4c2a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21830"}}
	{"specversion":"1.0","id":"30725705-3cb2-45d6-8312-6ee28cb5074b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"afc9f3f8-57c6-41fa-a5b6-1de68aadc255","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig"}}
	{"specversion":"1.0","id":"e798f8a1-96db-49a1-9e1a-8821a8d60ec7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube"}}
	{"specversion":"1.0","id":"3bbbd6ee-a080-4b35-871e-a591f60be0a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bfb2a6ba-94bd-42ff-a559-40c022ff7f5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9515b3a1-3d5a-4a08-b5ed-e56517c3bbb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-706361" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-706361
--- PASS: TestErrorJSONOutput (0.23s)
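
The events above are CloudEvents-style JSON, one object per line, so the failure can be pulled out mechanically. A sketch assuming jq is installed on the host (jq is not part of the tooling this suite uses):

    out/minikube-linux-amd64 start -p json-output-error-706361 --memory=3072 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # expected, per the event logged above: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/amd64
    out/minikube-linux-amd64 delete -p json-output-error-706361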

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (87.11s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-422207 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-422207 --driver=kvm2  --container-runtime=crio: (41.933995644s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-424429 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-424429 --driver=kvm2  --container-runtime=crio: (42.538603142s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-422207
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-424429
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-424429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-424429
helpers_test.go:175: Cleaning up "first-422207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-422207
--- PASS: TestMinikubeProfile (87.11s)
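
TestMinikubeProfile just starts two independent clusters and switches the active profile between them; profile list -ojson is what the assertions read. The same sequence, using the throwaway profile names from this run:

    out/minikube-linux-amd64 start -p first-422207 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 start -p second-424429 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 profile first-422207    # make first-422207 the active profile
    out/minikube-linux-amd64 profile list -ojson
    out/minikube-linux-amd64 delete -p second-424429
    out/minikube-linux-amd64 delete -p first-422207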

TestMountStart/serial/StartWithMountFirst (21.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-115425 --memory=3072 --mount-string /tmp/TestMountStartserial2790945142/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-115425 --memory=3072 --mount-string /tmp/TestMountStartserial2790945142/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (20.119354731s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.12s)

TestMountStart/serial/VerifyMountFirst (0.3s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-115425 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-115425 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
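
StartWithMountFirst boots a Kubernetes-free VM with a host directory mounted into the guest, and the verify step simply lists and inspects the mount point from inside it. A sketch with the flags exactly as logged; the host path is this run's test temp directory, so any readable host directory can stand in for it:

    out/minikube-linux-amd64 start -p mount-start-1-115425 --memory=3072 \
        --mount-string /tmp/TestMountStartserial2790945142/001:/minikube-host \
        --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
        --no-kubernetes --driver=kvm2 --container-runtime=crio
    # the mounted directory should be visible from inside the guest
    out/minikube-linux-amd64 -p mount-start-1-115425 ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p mount-start-1-115425 ssh -- findmnt --json /minikube-host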

TestMountStart/serial/StartWithMountSecond (22.07s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-136329 --memory=3072 --mount-string /tmp/TestMountStartserial2790945142/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-136329 --memory=3072 --mount-string /tmp/TestMountStartserial2790945142/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.066496886s)
--- PASS: TestMountStart/serial/StartWithMountSecond (22.07s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136329 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136329 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-115425 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136329 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136329 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.33s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-136329
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-136329: (1.325518646s)
--- PASS: TestMountStart/serial/Stop (1.33s)

TestMountStart/serial/RestartStopped (21.03s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-136329
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-136329: (20.033928166s)
--- PASS: TestMountStart/serial/RestartStopped (21.03s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136329 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-136329 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (103.01s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-456313 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1101 10:46:59.847007   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:47:29.154840   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-456313 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m42.660377287s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (103.01s)

TestMultiNode/serial/DeployApp2Nodes (6.82s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-456313 -- rollout status deployment/busybox: (5.128122917s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- exec busybox-7b57f96db7-9mp7c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- exec busybox-7b57f96db7-jw698 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- exec busybox-7b57f96db7-9mp7c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- exec busybox-7b57f96db7-jw698 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- exec busybox-7b57f96db7-9mp7c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- exec busybox-7b57f96db7-jw698 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.82s)

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- exec busybox-7b57f96db7-9mp7c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- exec busybox-7b57f96db7-9mp7c -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- exec busybox-7b57f96db7-jw698 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-456313 -- exec busybox-7b57f96db7-jw698 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)

TestMultiNode/serial/AddNode (46.4s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-456313 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-456313 -v=5 --alsologtostderr: (45.94588146s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.40s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-456313 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.47s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

TestMultiNode/serial/CopyFile (6.09s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp testdata/cp-test.txt multinode-456313:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp multinode-456313:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile300491467/001/cp-test_multinode-456313.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp multinode-456313:/home/docker/cp-test.txt multinode-456313-m02:/home/docker/cp-test_multinode-456313_multinode-456313-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m02 "sudo cat /home/docker/cp-test_multinode-456313_multinode-456313-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp multinode-456313:/home/docker/cp-test.txt multinode-456313-m03:/home/docker/cp-test_multinode-456313_multinode-456313-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m03 "sudo cat /home/docker/cp-test_multinode-456313_multinode-456313-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp testdata/cp-test.txt multinode-456313-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp multinode-456313-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile300491467/001/cp-test_multinode-456313-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp multinode-456313-m02:/home/docker/cp-test.txt multinode-456313:/home/docker/cp-test_multinode-456313-m02_multinode-456313.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313 "sudo cat /home/docker/cp-test_multinode-456313-m02_multinode-456313.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp multinode-456313-m02:/home/docker/cp-test.txt multinode-456313-m03:/home/docker/cp-test_multinode-456313-m02_multinode-456313-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m03 "sudo cat /home/docker/cp-test_multinode-456313-m02_multinode-456313-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp testdata/cp-test.txt multinode-456313-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp multinode-456313-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile300491467/001/cp-test_multinode-456313-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp multinode-456313-m03:/home/docker/cp-test.txt multinode-456313:/home/docker/cp-test_multinode-456313-m03_multinode-456313.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313 "sudo cat /home/docker/cp-test_multinode-456313-m03_multinode-456313.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 cp multinode-456313-m03:/home/docker/cp-test.txt multinode-456313-m02:/home/docker/cp-test_multinode-456313-m03_multinode-456313-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 ssh -n multinode-456313-m02 "sudo cat /home/docker/cp-test_multinode-456313-m03_multinode-456313-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.09s)
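
The CopyFile sequence above is a push/read-back loop: `minikube cp` places testdata/cp-test.txt on a node, then `minikube ssh -n <node>` cats it back so the contents can be compared; the same pattern repeats for every source/destination pair, including node-to-node copies. A rough standalone sketch of one round trip (an assumed example, not the helpers_test.go code):

// Copy a local file to a node with `minikube cp`, read it back over
// `minikube ssh`, and require the two to match.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const profile = "multinode-456313"
	const node = "multinode-456313-m02"

	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	if err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
		log.Fatalf("cp failed: %v", err)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match the source")
	}
	log.Println("cp round trip verified")
}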

TestMultiNode/serial/StopNode (2.59s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-456313 node stop m03: (1.909993447s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-456313 status: exit status 7 (343.657857ms)

-- stdout --
	multinode-456313
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-456313-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-456313-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-456313 status --alsologtostderr: exit status 7 (336.404686ms)

-- stdout --
	multinode-456313
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-456313-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-456313-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 10:49:08.870744   96967 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:49:08.871050   96967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:49:08.871061   96967 out.go:374] Setting ErrFile to fd 2...
	I1101 10:49:08.871067   96967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:49:08.871347   96967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 10:49:08.871588   96967 out.go:368] Setting JSON to false
	I1101 10:49:08.871623   96967 mustload.go:66] Loading cluster: multinode-456313
	I1101 10:49:08.871732   96967 notify.go:221] Checking for updates...
	I1101 10:49:08.872014   96967 config.go:182] Loaded profile config "multinode-456313": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:49:08.872032   96967 status.go:174] checking status of multinode-456313 ...
	I1101 10:49:08.874580   96967 status.go:371] multinode-456313 host status = "Running" (err=<nil>)
	I1101 10:49:08.874597   96967 host.go:66] Checking if "multinode-456313" exists ...
	I1101 10:49:08.876896   96967 main.go:143] libmachine: domain multinode-456313 has defined MAC address 52:54:00:aa:61:b7 in network mk-multinode-456313
	I1101 10:49:08.877288   96967 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:61:b7", ip: ""} in network mk-multinode-456313: {Iface:virbr1 ExpiryTime:2025-11-01 11:46:38 +0000 UTC Type:0 Mac:52:54:00:aa:61:b7 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-456313 Clientid:01:52:54:00:aa:61:b7}
	I1101 10:49:08.877314   96967 main.go:143] libmachine: domain multinode-456313 has defined IP address 192.168.39.44 and MAC address 52:54:00:aa:61:b7 in network mk-multinode-456313
	I1101 10:49:08.877453   96967 host.go:66] Checking if "multinode-456313" exists ...
	I1101 10:49:08.877707   96967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:49:08.879788   96967 main.go:143] libmachine: domain multinode-456313 has defined MAC address 52:54:00:aa:61:b7 in network mk-multinode-456313
	I1101 10:49:08.880143   96967 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:aa:61:b7", ip: ""} in network mk-multinode-456313: {Iface:virbr1 ExpiryTime:2025-11-01 11:46:38 +0000 UTC Type:0 Mac:52:54:00:aa:61:b7 Iaid: IPaddr:192.168.39.44 Prefix:24 Hostname:multinode-456313 Clientid:01:52:54:00:aa:61:b7}
	I1101 10:49:08.880181   96967 main.go:143] libmachine: domain multinode-456313 has defined IP address 192.168.39.44 and MAC address 52:54:00:aa:61:b7 in network mk-multinode-456313
	I1101 10:49:08.880308   96967 sshutil.go:53] new ssh client: &{IP:192.168.39.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/multinode-456313/id_rsa Username:docker}
	I1101 10:49:08.967666   96967 ssh_runner.go:195] Run: systemctl --version
	I1101 10:49:08.974221   96967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:49:08.992222   96967 kubeconfig.go:125] found "multinode-456313" server: "https://192.168.39.44:8443"
	I1101 10:49:08.992278   96967 api_server.go:166] Checking apiserver status ...
	I1101 10:49:08.992335   96967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:49:09.012237   96967 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup
	W1101 10:49:09.025521   96967 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1379/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:49:09.025602   96967 ssh_runner.go:195] Run: ls
	I1101 10:49:09.030858   96967 api_server.go:253] Checking apiserver healthz at https://192.168.39.44:8443/healthz ...
	I1101 10:49:09.036247   96967 api_server.go:279] https://192.168.39.44:8443/healthz returned 200:
	ok
	I1101 10:49:09.036272   96967 status.go:463] multinode-456313 apiserver status = Running (err=<nil>)
	I1101 10:49:09.036282   96967 status.go:176] multinode-456313 status: &{Name:multinode-456313 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:49:09.036298   96967 status.go:174] checking status of multinode-456313-m02 ...
	I1101 10:49:09.037860   96967 status.go:371] multinode-456313-m02 host status = "Running" (err=<nil>)
	I1101 10:49:09.037881   96967 host.go:66] Checking if "multinode-456313-m02" exists ...
	I1101 10:49:09.040265   96967 main.go:143] libmachine: domain multinode-456313-m02 has defined MAC address 52:54:00:be:80:ad in network mk-multinode-456313
	I1101 10:49:09.040667   96967 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:be:80:ad", ip: ""} in network mk-multinode-456313: {Iface:virbr1 ExpiryTime:2025-11-01 11:47:35 +0000 UTC Type:0 Mac:52:54:00:be:80:ad Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-456313-m02 Clientid:01:52:54:00:be:80:ad}
	I1101 10:49:09.040690   96967 main.go:143] libmachine: domain multinode-456313-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:be:80:ad in network mk-multinode-456313
	I1101 10:49:09.040816   96967 host.go:66] Checking if "multinode-456313-m02" exists ...
	I1101 10:49:09.041060   96967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:49:09.042944   96967 main.go:143] libmachine: domain multinode-456313-m02 has defined MAC address 52:54:00:be:80:ad in network mk-multinode-456313
	I1101 10:49:09.043337   96967 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:be:80:ad", ip: ""} in network mk-multinode-456313: {Iface:virbr1 ExpiryTime:2025-11-01 11:47:35 +0000 UTC Type:0 Mac:52:54:00:be:80:ad Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-456313-m02 Clientid:01:52:54:00:be:80:ad}
	I1101 10:49:09.043368   96967 main.go:143] libmachine: domain multinode-456313-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:be:80:ad in network mk-multinode-456313
	I1101 10:49:09.043508   96967 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21830-70113/.minikube/machines/multinode-456313-m02/id_rsa Username:docker}
	I1101 10:49:09.125087   96967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:49:09.145641   96967 status.go:176] multinode-456313-m02 status: &{Name:multinode-456313-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:49:09.145680   96967 status.go:174] checking status of multinode-456313-m03 ...
	I1101 10:49:09.147177   96967 status.go:371] multinode-456313-m03 host status = "Stopped" (err=<nil>)
	I1101 10:49:09.147198   96967 status.go:384] host is not running, skipping remaining checks
	I1101 10:49:09.147203   96967 status.go:176] multinode-456313-m03 status: &{Name:multinode-456313-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.59s)
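
As the exit status 7 above shows, `minikube status` exits non-zero while a node is stopped, so callers have to read the exit code rather than treat any error as fatal. A short Go sketch of doing that (an assumed example, not the test helper):

// Print the status output and, if the command exited non-zero, report the
// exit code instead of aborting; a stopped node is an expected state here.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "multinode-456313", "status").Output()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("status exited with code %d (non-zero is expected while a node is stopped)\n",
			exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("could not run minikube status: %v\n", err)
	}
}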

TestMultiNode/serial/StartAfterStop (45.25s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-456313 node start m03 -v=5 --alsologtostderr: (44.74406283s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (45.25s)

TestMultiNode/serial/RestartKeepsNodes (306.35s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-456313
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-456313
E1101 10:51:59.846951   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:52:29.162518   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-456313: (2m47.51765798s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-456313 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-456313 --wait=true -v=5 --alsologtostderr: (2m18.708928295s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-456313
--- PASS: TestMultiNode/serial/RestartKeepsNodes (306.35s)
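
RestartKeepsNodes captures `minikube node list` before a full stop/start cycle and again afterwards, and the test passes only if the cluster still has the same nodes. A rough sketch of that invariant as a standalone check (an assumed example, not the multinode_test.go code; comparing raw listings assumes node names and addresses survive the restart, as they did in this run):

// Stop and restart a profile, then require `minikube node list` to be
// unchanged across the cycle.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func nodeList(profile string) string {
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	if err != nil {
		log.Fatalf("node list failed: %v", err)
	}
	return string(out)
}

func main() {
	const profile = "multinode-456313"
	before := nodeList(profile)

	if err := exec.Command("minikube", "stop", "-p", profile).Run(); err != nil {
		log.Fatalf("stop failed: %v", err)
	}
	if err := exec.Command("minikube", "start", "-p", profile, "--wait=true").Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}

	after := nodeList(profile)
	if before != after {
		log.Fatalf("node list changed across restart:\nbefore:\n%s\nafter:\n%s", before, after)
	}
	fmt.Println("restart kept all nodes")
}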

TestMultiNode/serial/DeleteNode (2.61s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-456313 node delete m03: (2.157117323s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.61s)

TestMultiNode/serial/StopMultiNode (174.74s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 stop
E1101 10:56:59.847051   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:12.227102   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:57:29.161781   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-456313 stop: (2m54.608522104s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-456313 status: exit status 7 (65.612695ms)

-- stdout --
	multinode-456313
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-456313-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-456313 status --alsologtostderr: exit status 7 (70.405697ms)

-- stdout --
	multinode-456313
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-456313-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 10:57:58.105749   99464 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:57:58.106031   99464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:57:58.106041   99464 out.go:374] Setting ErrFile to fd 2...
	I1101 10:57:58.106045   99464 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:57:58.106242   99464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 10:57:58.106418   99464 out.go:368] Setting JSON to false
	I1101 10:57:58.106443   99464 mustload.go:66] Loading cluster: multinode-456313
	I1101 10:57:58.106613   99464 notify.go:221] Checking for updates...
	I1101 10:57:58.106819   99464 config.go:182] Loaded profile config "multinode-456313": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:57:58.106833   99464 status.go:174] checking status of multinode-456313 ...
	I1101 10:57:58.109001   99464 status.go:371] multinode-456313 host status = "Stopped" (err=<nil>)
	I1101 10:57:58.109018   99464 status.go:384] host is not running, skipping remaining checks
	I1101 10:57:58.109024   99464 status.go:176] multinode-456313 status: &{Name:multinode-456313 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:57:58.109050   99464 status.go:174] checking status of multinode-456313-m02 ...
	I1101 10:57:58.110364   99464 status.go:371] multinode-456313-m02 host status = "Stopped" (err=<nil>)
	I1101 10:57:58.110383   99464 status.go:384] host is not running, skipping remaining checks
	I1101 10:57:58.110391   99464 status.go:176] multinode-456313-m02 status: &{Name:multinode-456313-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (174.74s)

TestMultiNode/serial/RestartMultiNode (98.15s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-456313 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-456313 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m37.693597536s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-456313 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (98.15s)

TestMultiNode/serial/ValidateNameConflict (41.42s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-456313
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-456313-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-456313-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.431857ms)

-- stdout --
	* [multinode-456313-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-456313-m02' is duplicated with machine name 'multinode-456313-m02' in profile 'multinode-456313'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-456313-m03 --driver=kvm2  --container-runtime=crio
E1101 11:00:02.918716   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-456313-m03 --driver=kvm2  --container-runtime=crio: (40.241556579s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-456313
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-456313: exit status 80 (207.021769ms)

-- stdout --
	* Adding node m03 to cluster multinode-456313 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-456313-m03 already exists in multinode-456313-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-456313-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.42s)
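
ValidateNameConflict expects minikube to refuse a profile whose name collides with an existing machine name, exiting with status 14 and the MK_USAGE message shown above. A sketch of asserting exactly that from Go (an assumed example, not the test's own code):

// Attempt to start a profile whose name duplicates an existing machine and
// verify it fails with exit status 14 and the "should be unique" message.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "multinode-456313-m02",
		"--driver=kvm2", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		log.Fatalf("expected a non-zero exit, got err=%v", err)
	}
	if exitErr.ExitCode() != 14 {
		log.Fatalf("expected exit status 14, got %d", exitErr.ExitCode())
	}
	if !strings.Contains(string(out), "Profile name should be unique") {
		log.Fatalf("missing duplicate-name error in output:\n%s", out)
	}
	fmt.Println("duplicate profile name rejected as expected")
}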

TestScheduledStopUnix (110.53s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-418917 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-418917 --memory=3072 --driver=kvm2  --container-runtime=crio: (38.885693782s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-418917 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-418917 -n scheduled-stop-418917
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-418917 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 11:03:36.609706   73998 retry.go:31] will retry after 136.087µs: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.610849   73998 retry.go:31] will retry after 84.679µs: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.612024   73998 retry.go:31] will retry after 329.585µs: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.613183   73998 retry.go:31] will retry after 346.403µs: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.614306   73998 retry.go:31] will retry after 264.95µs: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.615480   73998 retry.go:31] will retry after 489.114µs: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.616652   73998 retry.go:31] will retry after 1.57051ms: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.618921   73998 retry.go:31] will retry after 2.05305ms: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.621071   73998 retry.go:31] will retry after 3.19603ms: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.625299   73998 retry.go:31] will retry after 4.056439ms: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.629459   73998 retry.go:31] will retry after 3.830698ms: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.633691   73998 retry.go:31] will retry after 12.865069ms: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.647037   73998 retry.go:31] will retry after 16.427113ms: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.663789   73998 retry.go:31] will retry after 18.400231ms: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.683124   73998 retry.go:31] will retry after 31.817035ms: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
I1101 11:03:36.715479   73998 retry.go:31] will retry after 32.939927ms: open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/scheduled-stop-418917/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-418917 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-418917 -n scheduled-stop-418917
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-418917
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-418917 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-418917
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-418917: exit status 7 (62.459758ms)

-- stdout --
	scheduled-stop-418917
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-418917 -n scheduled-stop-418917
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-418917 -n scheduled-stop-418917: exit status 7 (59.876148ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-418917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-418917
--- PASS: TestScheduledStopUnix (110.53s)
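
The retry.go lines above show the test polling for the scheduled-stop pid file with steadily growing delays. An illustrative backoff loop in the same spirit (an assumed example, not minikube's retry package; the pid path under $HOME stands in for the profile directory used in this run):

// Poll for a file with an increasing delay until it can be read or the
// deadline expires, logging each retry much like the messages above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForFile(path string, deadline time.Duration) ([]byte, error) {
	delay := 100 * time.Microsecond
	stop := time.Now().Add(deadline)
	for {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		if time.Now().After(stop) {
			return nil, fmt.Errorf("gave up waiting for %s: %w", path, err)
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // roughly the growth visible in the retry log above
	}
}

func main() {
	pid, err := waitForFile(os.ExpandEnv("$HOME/.minikube/profiles/scheduled-stop-418917/pid"),
		5*time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("scheduled stop pid: %s\n", pid)
}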

TestRunningBinaryUpgrade (97.46s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1902843666 start -p running-upgrade-768085 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1902843666 start -p running-upgrade-768085 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (52.4295318s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-768085 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-768085 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (42.291470753s)
helpers_test.go:175: Cleaning up "running-upgrade-768085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-768085
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-768085: (2.153063387s)
--- PASS: TestRunningBinaryUpgrade (97.46s)

TestKubernetesUpgrade (159.86s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1101 11:06:59.846499   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.814866392s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-272276
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-272276: (2.004453251s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-272276 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-272276 status --format={{.Host}}: exit status 7 (70.439454ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (35.930962496s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-272276 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (81.106102ms)

-- stdout --
	* [kubernetes-upgrade-272276] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-272276
	    minikube start -p kubernetes-upgrade-272276 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2722762 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-272276 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-272276 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.881441649s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-272276" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-272276
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-272276: (1.015528072s)
--- PASS: TestKubernetesUpgrade (159.86s)
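
After the second start brings the cluster to v1.34.1, the test confirms the running server version with `kubectl version --output=json` (version_upgrade_test.go:248 above). A small sketch of that check, assuming kubectl's standard JSON layout with a serverVersion.gitVersion field:

// Query the upgraded cluster's server version and require v1.34.1.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type versionInfo struct {
	ServerVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"serverVersion"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-272276",
		"version", "--output=json").Output()
	if err != nil {
		log.Fatalf("kubectl version failed: %v", err)
	}
	var v versionInfo
	if err := json.Unmarshal(out, &v); err != nil {
		log.Fatalf("decode version output: %v", err)
	}
	if v.ServerVersion.GitVersion != "v1.34.1" {
		log.Fatalf("expected server v1.34.1, got %q", v.ServerVersion.GitVersion)
	}
	fmt.Println("cluster is running", v.ServerVersion.GitVersion)
}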

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-028702 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-028702 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (101.89678ms)

-- stdout --
	* [NoKubernetes-028702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (85.39s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-028702 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-028702 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m25.080138275s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-028702 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (85.39s)
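
StartWithK8s finishes by reading `minikube status -o json`. A sketch of consuming that output for a single-node profile like this one (an assumed example; the field names mirror the status struct printed in the stderr traces earlier in this report):

// Decode `minikube status -o json` into a small struct and print the
// component states. A non-zero exit is normal when parts are stopped, so
// only the absence of any output is treated as fatal.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type clusterStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "NoKubernetes-028702",
		"status", "-o", "json").Output()
	if len(out) == 0 && err != nil {
		log.Fatalf("no status output: %v", err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode status: %v", err)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Name, st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}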

TestNetworkPlugins/group/false (3.48s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-216814 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-216814 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (119.510831ms)

-- stdout --
	* [false-216814] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21830
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1101 11:04:51.404965  103128 out.go:360] Setting OutFile to fd 1 ...
	I1101 11:04:51.405407  103128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:04:51.405425  103128 out.go:374] Setting ErrFile to fd 2...
	I1101 11:04:51.405434  103128 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 11:04:51.405930  103128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21830-70113/.minikube/bin
	I1101 11:04:51.406792  103128 out.go:368] Setting JSON to false
	I1101 11:04:51.407758  103128 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":10039,"bootTime":1761985052,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 11:04:51.407852  103128 start.go:143] virtualization: kvm guest
	I1101 11:04:51.410375  103128 out.go:179] * [false-216814] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 11:04:51.411937  103128 out.go:179]   - MINIKUBE_LOCATION=21830
	I1101 11:04:51.411990  103128 notify.go:221] Checking for updates...
	I1101 11:04:51.414284  103128 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 11:04:51.415525  103128 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21830-70113/kubeconfig
	I1101 11:04:51.416888  103128 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21830-70113/.minikube
	I1101 11:04:51.418160  103128 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 11:04:51.419806  103128 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 11:04:51.421681  103128 config.go:182] Loaded profile config "NoKubernetes-028702": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:04:51.421801  103128 config.go:182] Loaded profile config "force-systemd-env-297549": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:04:51.421896  103128 config.go:182] Loaded profile config "offline-crio-017229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 11:04:51.421986  103128 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 11:04:51.458740  103128 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 11:04:51.459983  103128 start.go:309] selected driver: kvm2
	I1101 11:04:51.460001  103128 start.go:930] validating driver "kvm2" against <nil>
	I1101 11:04:51.460014  103128 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 11:04:51.462274  103128 out.go:203] 
	W1101 11:04:51.463720  103128 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 11:04:51.465001  103128 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-216814 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-216814

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-216814

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-216814

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-216814

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-216814

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-216814

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-216814

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-216814

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-216814

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-216814

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-216814

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-216814" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-216814" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
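
This empty client configuration (clusters, contexts and users all null) is consistent with the "context was not found" / "context does not exist" errors throughout this dump: the false-216814 profile was never created. As a minimal sketch (not part of the captured log), the same condition is visible with:

# prints only the header row when no contexts are configured
kubectl config get-contexts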

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-216814

>>> host: docker daemon status:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: docker daemon config:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: /etc/docker/daemon.json:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: docker system info:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: cri-docker daemon status:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: cri-docker daemon config:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: cri-dockerd version:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: containerd daemon status:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: containerd daemon config:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: /etc/containerd/config.toml:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: containerd config dump:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: crio daemon status:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: crio daemon config:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: /etc/crio:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

>>> host: crio config:
* Profile "false-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-216814"

----------------------- debugLogs end: false-216814 [took: 3.197671576s] --------------------------------
helpers_test.go:175: Cleaning up "false-216814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-216814
--- PASS: TestNetworkPlugins/group/false (3.48s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (29.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-028702 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-028702 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (28.442568468s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-028702 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-028702 status -o json: exit status 2 (223.523746ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-028702","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
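
The status JSON above is what a --no-kubernetes profile should report: the host VM is Running while Kubelet and APIServer are Stopped, and the status command exits non-zero (2 in this run) because not all components are running. A minimal sketch of checking that combination from a shell, assuming jq is available (not part of the test):

# hypothetical one-liner mirroring the fields the test inspects; jq -e sets the exit status from the boolean result
out/minikube-linux-amd64 -p NoKubernetes-028702 status -o json | jq -e '.Host == "Running" and .Kubelet == "Stopped" and .APIServer == "Stopped"'
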
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-028702
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (29.57s)

                                                
                                    
TestNoKubernetes/serial/Start (40.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-028702 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-028702 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (40.579802473s)
--- PASS: TestNoKubernetes/serial/Start (40.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-028702 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-028702 "sudo systemctl is-active --quiet service kubelet": exit status 1 (185.975904ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
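
The non-zero exit here is the assertion the test wants: systemctl is-active --quiet returns 0 only when the named unit is active, so the remote command exiting 4 (surfaced as "Process exited with status 4") confirms the kubelet is not running on this --no-kubernetes profile. A minimal standalone version of the same check (hypothetical, outside the test harness):

# exit status 0 would mean the kubelet unit is active; anything else means it is not
out/minikube-linux-amd64 ssh -p NoKubernetes-028702 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not active (expected here)"
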
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-028702
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-028702: (1.40877144s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (42.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-028702 --driver=kvm2  --container-runtime=crio
E1101 11:07:29.154208   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-028702 --driver=kvm2  --container-runtime=crio: (42.074417744s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (42.07s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-028702 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-028702 "sudo systemctl is-active --quiet service kubelet": exit status 1 (186.523344ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.57s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (120.65s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3534742736 start -p stopped-upgrade-391167 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3534742736 start -p stopped-upgrade-391167 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m0.980696674s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3534742736 -p stopped-upgrade-391167 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3534742736 -p stopped-upgrade-391167 stop: (1.819376205s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-391167 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-391167 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.852941258s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (120.65s)

                                                
                                    
TestISOImage/Setup (59.29s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p guest-290834 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p guest-290834 --no-kubernetes --driver=kvm2  --container-runtime=crio: (59.292061128s)
--- PASS: TestISOImage/Setup (59.29s)

                                                
                                    
TestISOImage/Binaries/crictl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.18s)

                                                
                                    
TestISOImage/Binaries/curl (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

                                                
                                    
TestISOImage/Binaries/docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.17s)

                                                
                                    
TestISOImage/Binaries/git (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.18s)

                                                
                                    
TestISOImage/Binaries/iptables (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.18s)

                                                
                                    
TestISOImage/Binaries/podman (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.17s)

                                                
                                    
TestISOImage/Binaries/rsync (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.17s)

                                                
                                    
TestISOImage/Binaries/socat (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.16s)

                                                
                                    
TestISOImage/Binaries/wget (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0.16s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.16s)

                                                
                                    
TestPause/serial/Start (59.13s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-112657 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-112657 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (59.126534571s)
--- PASS: TestPause/serial/Start (59.13s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-391167
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-391167: (1.409887495s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (97.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m37.494421786s)
--- PASS: TestNetworkPlugins/group/auto/Start (97.49s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (96.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m36.511283693s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (96.51s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (101.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m41.978225955s)
--- PASS: TestNetworkPlugins/group/calico/Start (101.98s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (87.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m27.808981826s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (87.81s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-216814 "pgrep -a kubelet"
I1101 11:11:49.897345   73998 config.go:182] Loaded profile config "auto-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-216814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s6wlv" [175f32c2-275f-4842-9124-3ecef213dc14] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s6wlv" [175f32c2-275f-4842-9124-3ecef213dc14] Running
E1101 11:11:59.846646   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005561752s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-jknzb" [ed4fa39e-adb5-4601-8441-a7f31cfdc4ed] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00485733s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
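
The controller check above simply waits for a pod carrying the app=kindnet label in kube-system to report Running. An equivalent one-off query (hypothetical, outside the harness) would be:

# lists the kindnet DaemonSet pods the test is waiting on
kubectl --context kindnet-216814 get pods -n kube-system -l app=kindnet -o wide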

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-216814 "pgrep -a kubelet"
I1101 11:12:00.127335   73998 config.go:182] Loaded profile config "kindnet-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-216814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-216814 replace --force -f testdata/netcat-deployment.yaml: (1.362203921s)
I1101 11:12:01.740947   73998 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1101 11:12:01.872353   73998 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tgqdl" [68def661-78db-4d7e-a91e-70ac9b529f29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tgqdl" [68def661-78db-4d7e-a91e-70ac9b529f29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007153347s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.77s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-216814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
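
For context, the two nc probes above exercise different return paths from inside the netcat pod: the Localhost test dials the pod's own loopback on port 8080, while the HairPin test dials the pod's own Service name ("netcat"), so the traffic has to leave the pod and be routed back to it. A minimal sketch of the same pair of checks (hypothetical, mirroring the commands in these tests):

# loopback: the pod reaching itself directly
kubectl --context auto-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# hairpin: the pod reaching itself back through its own Service
kubectl --context auto-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"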

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-216814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (88.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m28.422771251s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.42s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-7jcft" [6f7b1f21-fca6-4cc8-ba87-eb807ef2c6d1] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.007135787s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (91.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m31.367951493s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.37s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-216814 "pgrep -a kubelet"
I1101 11:12:32.161763   73998 config.go:182] Loaded profile config "calico-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-216814 replace --force -f testdata/netcat-deployment.yaml
I1101 11:12:32.484081   73998 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8wwmd" [d3c34603-8ed1-4f22-8cab-3097d255b4de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8wwmd" [d3c34603-8ed1-4f22-8cab-3097d255b4de] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005546963s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-216814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-216814 "pgrep -a kubelet"
I1101 11:12:44.678577   73998 config.go:182] Loaded profile config "custom-flannel-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-216814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fvq2n" [5c2c28b7-0617-489f-8fc7-dac870581c18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fvq2n" [5c2c28b7-0617-489f-8fc7-dac870581c18] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.008464778s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-216814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (101.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-216814 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m41.324226047s)
--- PASS: TestNetworkPlugins/group/bridge/Start (101.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (82.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-918459 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-918459 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m22.421464265s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (82.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-216814 "pgrep -a kubelet"
I1101 11:13:47.154172   73998 config.go:182] Loaded profile config "enable-default-cni-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-216814 replace --force -f testdata/netcat-deployment.yaml
I1101 11:13:47.440334   73998 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g89n2" [482ea17c-45dc-4671-a4c1-b16682663e0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 11:13:52.229407   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-g89n2" [482ea17c-45dc-4671-a4c1-b16682663e0a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005605336s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-216814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
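
For anyone repeating the enable-default-cni connectivity probes by hand, the DNS, Localhost and HairPin checks above reduce to three kubectl invocations against the netcat deployment created from testdata/netcat-deployment.yaml (commands reproduced verbatim from the log; the comments describe what each probe exercises):

# DNS: resolve the kubernetes.default service from inside the pod
kubectl --context enable-default-cni-216814 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: port 8080 on the pod's own loopback answers
kubectl --context enable-default-cni-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: the pod can reach itself back through its own service name
kubectl --context enable-default-cni-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"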

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-fwfz4" [9a87ef21-34b7-4ed9-aff6-396de20a57d5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006242122s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-216814 "pgrep -a kubelet"
I1101 11:14:07.310692   73998 config.go:182] Loaded profile config "flannel-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-216814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qj8xb" [c22ef9c2-a298-4ce2-b1a9-e3612d22a6c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qj8xb" [c22ef9c2-a298-4ce2-b1a9-e3612d22a6c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006252481s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (104.74s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-294319 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-294319 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m44.737168319s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (104.74s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-216814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (93.64s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-571864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-571864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m33.640701624s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.64s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-918459 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [edd0c3d2-b4e1-4cb6-9538-19e83f322c38] Pending
helpers_test.go:352: "busybox" [edd0c3d2-b4e1-4cb6-9538-19e83f322c38] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [edd0c3d2-b4e1-4cb6-9538-19e83f322c38] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.005451663s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-918459 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.38s)
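
To replay the DeployApp check manually: create the busybox pod from testdata/busybox.yaml, wait for it to reach Running, then confirm the file-descriptor limit inside it. The create and exec commands are verbatim from the log; the wait step is only approximated here with kubectl wait rather than the harness's own poller:

kubectl --context old-k8s-version-918459 create -f testdata/busybox.yaml
# approximation of the harness's 8m readiness wait on the integration-test=busybox label
kubectl --context old-k8s-version-918459 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
kubectl --context old-k8s-version-918459 exec busybox -- /bin/sh -c "ulimit -n"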

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-216814 "pgrep -a kubelet"
I1101 11:14:41.172057   73998 config.go:182] Loaded profile config "bridge-216814": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-216814 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hjzd2" [636c388c-411b-4cea-ba1a-aba4f1e88db7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hjzd2" [636c388c-411b-4cea-ba1a-aba4f1e88db7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004558506s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.32s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-918459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-918459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.706876531s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-918459 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.80s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (82.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-918459 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-918459 --alsologtostderr -v=3: (1m22.9524099s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (82.95s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-216814 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-216814 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-287419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-287419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.334683666s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (85.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-294319 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d7e61f1f-0095-427f-8772-e4cb0045ba7a] Pending
helpers_test.go:352: "busybox" [d7e61f1f-0095-427f-8772-e4cb0045ba7a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d7e61f1f-0095-427f-8772-e4cb0045ba7a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004487039s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-294319 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (11.3s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-571864 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c0a4274d-10c1-43e2-afb0-759e375b89e9] Pending
helpers_test.go:352: "busybox" [c0a4274d-10c1-43e2-afb0-759e375b89e9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c0a4274d-10c1-43e2-afb0-759e375b89e9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.008907913s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-571864 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-294319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-294319 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (82.94s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-294319 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-294319 --alsologtostderr -v=3: (1m22.938227612s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (82.94s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-918459 -n old-k8s-version-918459
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-918459 -n old-k8s-version-918459: exit status 7 (62.152438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-918459 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-918459 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-918459 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (46.918330239s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-918459 -n old-k8s-version-918459
E1101 11:16:59.847225   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.22s)
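
Taken together, the Stop, EnableAddonAfterStop and SecondStart steps for this profile amount to the following sequence (commands reproduced from the log; while the VM is stopped, the status call prints "Stopped" and exits with status 7, which the test tolerates):

out/minikube-linux-amd64 stop -p old-k8s-version-918459 --alsologtostderr -v=3
out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-918459 -n old-k8s-version-918459
out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-918459 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
out/minikube-linux-amd64 start -p old-k8s-version-918459 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-918459 -n old-k8s-version-918459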

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-571864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-571864 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (84.85s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-571864 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-571864 --alsologtostderr -v=3: (1m24.846853038s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (84.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3f85d60f-859a-4d40-83a1-8565332c1575] Pending
helpers_test.go:352: "busybox" [3f85d60f-859a-4d40-83a1-8565332c1575] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3f85d60f-859a-4d40-83a1-8565332c1575] Running
E1101 11:16:42.920837   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/functional-950389/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.005577562s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-287419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-287419 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.067102268s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-287419 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (77.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-287419 --alsologtostderr -v=3
E1101 11:16:50.115262   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:50.121689   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:50.133115   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:50.154631   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:50.196084   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:50.277606   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:50.439172   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:50.761106   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:51.403132   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:52.685174   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:53.851364   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:53.857809   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:53.869198   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:53.890692   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:53.932164   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:54.013672   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:54.175341   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:54.497299   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:55.138659   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:55.247245   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:56.420559   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:16:58.982079   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-287419 --alsologtostderr -v=3: (1m17.170165026s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (77.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7tjg6" [5b4d5085-4775-4174-bd9c-ed6cfb1bbcdd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1101 11:17:00.369217   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:04.103735   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7tjg6" [5b4d5085-4775-4174-bd9c-ed6cfb1bbcdd] Running
E1101 11:17:10.611591   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:14.345175   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.004787094s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (15.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7tjg6" [5b4d5085-4775-4174-bd9c-ed6cfb1bbcdd] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003977851s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-918459 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-918459 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-918459 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-918459 -n old-k8s-version-918459
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-918459 -n old-k8s-version-918459: exit status 2 (216.780584ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-918459 -n old-k8s-version-918459
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-918459 -n old-k8s-version-918459: exit status 2 (217.270704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-918459 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-918459 -n old-k8s-version-918459
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-918459 -n old-k8s-version-918459
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.60s)
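
The Pause check boils down to pausing the profile, confirming that the apiserver reports "Paused" while the kubelet reports "Stopped" (both status calls exit with status 2, which the test treats as acceptable), and then unpausing and re-checking. The commands, verbatim from the log:

out/minikube-linux-amd64 pause -p old-k8s-version-918459 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-918459 -n old-k8s-version-918459
out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-918459 -n old-k8s-version-918459
out/minikube-linux-amd64 unpause -p old-k8s-version-918459 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-918459 -n old-k8s-version-918459
out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-918459 -n old-k8s-version-918459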

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-268638 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 11:17:25.972031   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:25.978415   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:25.989910   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:26.011316   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:26.053665   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:26.135218   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:26.296784   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:26.618141   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:27.259586   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:28.541356   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:29.154186   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/addons-086339/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:31.093612   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:31.103114   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-268638 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (48.712219531s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.71s)
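
Note on the newest-cni profile: its first start brings the cluster up in CNI mode (the later EnableAddonWhileActive step warns that cni mode requires additional setup before pods can schedule), so --wait is narrowed to apiserver, system_pods and default_sa, and the pod network CIDR is handed straight to kubeadm via --extra-config. The invocation, verbatim from the log:

out/minikube-linux-amd64 start -p newest-cni-268638 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1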

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-294319 -n no-preload-294319
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-294319 -n no-preload-294319: exit status 7 (84.564489ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-294319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (72.49s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-294319 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 11:17:34.827381   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:36.225348   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-294319 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m11.932516727s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-294319 -n no-preload-294319
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (72.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571864 -n embed-certs-571864
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571864 -n embed-certs-571864: exit status 7 (61.290985ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-571864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (67.7s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-571864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 11:17:44.983025   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:44.989391   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:45.000765   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:45.022329   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:45.064170   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:45.145741   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:45.307817   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:45.629752   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:46.272058   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:46.467663   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:47.553458   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:50.115359   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:17:55.237368   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-571864 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m7.350935152s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-571864 -n embed-certs-571864
E1101 11:18:52.558877   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (67.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419: exit status 7 (89.267091ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-287419 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (69.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-287419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 11:18:05.479167   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:06.949249   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:12.055612   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/auto-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-287419 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m8.797186218s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (69.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.79s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-268638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-268638 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.789696488s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.79s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-268638 --alsologtostderr -v=3
E1101 11:18:15.789102   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/kindnet-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:25.960702   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-268638 --alsologtostderr -v=3: (11.644282577s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268638 -n newest-cni-268638
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268638 -n newest-cni-268638: exit status 7 (85.955978ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-268638 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
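The sequence above can be reproduced by hand; a minimal sketch, assuming the locally built binary at out/minikube-linux-amd64 and the stopped newest-cni-268638 profile from this run:

    # While the profile is stopped, `status` exits 7 and prints "Stopped" (treated as "may be ok" by the test).
    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268638 -n newest-cni-268638
    # Addons can still be enabled against the stopped profile; the addon is applied when the profile is started again (the SecondStart step below).
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-268638 --images=MetricsScraper=registry.k8s.io/echoserver:1.4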

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (57.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-268638 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-268638 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (57.552708465s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268638 -n newest-cni-268638
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (57.78s)
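For readability, here is the same restart invocation from the log above, one flag per line; a sketch under the same assumptions (locally built binary, existing profile):

    # Restart the stopped profile with a CNI-only network plugin and a custom pod CIDR handed to kubeadm.
    out/minikube-linux-amd64 start -p newest-cni-268638 \
        --memory=3072 --alsologtostderr \
        --wait=apiserver,system_pods,default_sa \
        --network-plugin=cni \
        --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
        --driver=kvm2 --container-runtime=crio \
        --kubernetes-version=v1.34.1
    # Confirm the host came back up.
    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-268638 -n newest-cni-268638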

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-thpm8" [f080ee43-7d9d-412a-96c1-8c931c512186] Running
E1101 11:18:47.424604   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:47.431190   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:47.442763   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:47.464548   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:47.506045   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:47.587553   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:47.749591   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:47.910915   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/calico-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:48.071583   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:48.714056   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:18:49.996451   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.107439509s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (20.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7jkbb" [eb74a1a1-3757-460e-a650-1f4604873569] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7jkbb" [eb74a1a1-3757-460e-a650-1f4604873569] Running
E1101 11:19:06.922568   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/custom-flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.004188632s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (20.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-thpm8" [f080ee43-7d9d-412a-96c1-8c931c512186] Running
E1101 11:18:57.680868   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005956156s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-294319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)
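The same check can be run manually with kubectl; a minimal sketch, assuming the no-preload-294319 kube context from this run is available locally:

    # List the dashboard pods the test waits on (label selector taken from the log above).
    kubectl --context no-preload-294319 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    # Inspect the companion scraper deployment, as the test does once the pods are healthy.
    kubectl --context no-preload-294319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard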

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-294319 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-294319 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-294319 --alsologtostderr -v=1: (1.146038359s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-294319 -n no-preload-294319
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-294319 -n no-preload-294319: exit status 2 (328.877702ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-294319 -n no-preload-294319
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-294319 -n no-preload-294319: exit status 2 (313.307116ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-294319 --alsologtostderr -v=1
E1101 11:19:01.114402   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:19:01.120890   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:19:01.132379   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:19:01.154668   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:19:01.196230   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:19:01.277732   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:19:01.439198   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:19:01.760585   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-294319 --alsologtostderr -v=1: (1.067699146s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-294319 -n no-preload-294319
E1101 11:19:02.402470   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-294319 -n no-preload-294319
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.62s)
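The pause round-trip above follows a fixed pattern: pause, check component status (a non-zero exit is expected while paused), unpause, check again. A minimal sketch of the same sequence, assuming the no-preload-294319 profile and the locally built binary:

    # Pause the cluster; while paused, `status` exits 2 and reports the apiserver as Paused and the kubelet as Stopped.
    out/minikube-linux-amd64 pause -p no-preload-294319 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-294319 -n no-preload-294319
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-294319 -n no-preload-294319
    # Resume; the same status queries should then succeed normally.
    out/minikube-linux-amd64 unpause -p no-preload-294319 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-294319 -n no-preload-294319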

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.38s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.38s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.22s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.22s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.26s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.26s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.21s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.21s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.21s)
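Each PersistentMounts subtest above issues the same df check for one path. A compact sketch that covers all of the paths from this run in one loop, assuming the guest-290834 profile is still running:

    # df -t ext4 only reports ext4-backed mounts, so an empty result means the path is not on the persistent disk.
    for p in /data /var/lib/docker /var/lib/cni /var/lib/kubelet /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
        out/minikube-linux-amd64 -p guest-290834 ssh "df -t ext4 $p | grep $p" || echo "not persistent: $p"
    done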

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.18s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p guest-290834 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.18s)
E1101 11:19:07.922491   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/enable-default-cni-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 11:19:11.368157   73998 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21830-70113/.minikube/profiles/flannel-216814/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7jkbb" [eb74a1a1-3757-460e-a650-1f4604873569] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004115293s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-571864 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-571864 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-571864 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-571864 -n embed-certs-571864
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-571864 -n embed-certs-571864: exit status 2 (255.936756ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-571864 -n embed-certs-571864
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-571864 -n embed-certs-571864: exit status 2 (249.165174ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-571864 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-571864 -n embed-certs-571864
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-571864 -n embed-certs-571864
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-268638 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-268638 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268638 -n newest-cni-268638
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268638 -n newest-cni-268638: exit status 2 (224.742385ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268638 -n newest-cni-268638
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268638 -n newest-cni-268638: exit status 2 (219.010629ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-268638 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-268638 -n newest-cni-268638
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-268638 -n newest-cni-268638
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.45s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-287419 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-287419 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419: exit status 2 (215.675355ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419: exit status 2 (211.774767ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-287419 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-287419 -n default-k8s-diff-port-287419
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.52s)

                                                
                                    

Test skip (40/343)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
258 TestNetworkPlugins/group/kubenet 3.44
267 TestNetworkPlugins/group/cilium 3.76
295 TestStartStop/group/disable-driver-mounts 0.21
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-086339 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-216814 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-216814" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-216814" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-216814

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-216814"

                                                
                                                
----------------------- debugLogs end: kubenet-216814 [took: 3.283799265s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-216814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-216814
--- SKIP: TestNetworkPlugins/group/kubenet (3.44s)
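Note: every "context was not found" / "Profile "kubenet-216814" not found" line above is expected for this group; it was skipped before a cluster was ever started, so the debugLogs collector queried a minikube profile and kubeconfig context that never existed. A minimal sketch for confirming that locally (assuming the same out/minikube-linux-amd64 binary path used by this run):

  # list kubeconfig contexts; a skipped group never registers one
  kubectl config get-contexts

  # list minikube profiles; kubenet-216814 should be absent (or already removed by the cleanup step below)
  out/minikube-linux-amd64 profile list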

                                                
                                    
TestNetworkPlugins/group/cilium (3.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-216814 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-216814" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-216814

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-216814" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-216814"

                                                
                                                
----------------------- debugLogs end: cilium-216814 [took: 3.594731518s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-216814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-216814
--- SKIP: TestNetworkPlugins/group/cilium (3.76s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-756998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-756998
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)
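Note: this group is guarded to run only against the virtualbox driver, so on this KVM job it is skipped immediately and the placeholder profile is deleted. A hedged sketch of how one might exercise the equivalent profile locally (assuming a host with VirtualBox installed and the same binary path; the profile name below simply mirrors the one in this log):

  # start the profile against the virtualbox driver instead of kvm2
  out/minikube-linux-amd64 start -p disable-driver-mounts-756998 --driver=virtualbox

  # confirm the profile exists, then clean it up the same way the test harness does
  out/minikube-linux-amd64 profile list
  out/minikube-linux-amd64 delete -p disable-driver-mounts-756998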

                                                
                                    