Test Report: KVM_Linux_crio 21832

e7c87104757589f66628ccdf942f4e049b607564:2025-11-01:42155

Failed tests (5/337)

Order  Failed test                                    Duration (s)
37     TestAddons/parallel/Ingress                    156.43
99     TestFunctional/parallel/PersistentVolumeClaim  389.96
244    TestPreload                                    128.58
288    TestPause/serial/SecondStartNoReconfiguration  83.55
357    TestNetworkPlugins/group/calico/Start          928.42
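
A single failure can usually be reproduced in isolation instead of re-running the whole suite. A minimal sketch, assuming a minikube source checkout; the ./test/integration path and the -minikube-start-args flag follow the upstream integration-test layout and should be treated as assumptions:

    # build the minikube binary that the integration tests exercise
    make
    # re-run only the failed Ingress test against the same driver/runtime combination
    go test ./test/integration -v -timeout 60m \
      -run 'TestAddons/parallel/Ingress' \
      -minikube-start-args="--driver=kvm2 --container-runtime=crio"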
TestAddons/parallel/Ingress (156.43s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-610936 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-610936 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-610936 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [d2369d8f-b848-4d1a-9e8f-e2845ef60291] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [d2369d8f-b848-4d1a-9e8f-e2845ef60291] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.006725127s
I1101 09:30:16.969554  348518 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610936 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.855729072s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-610936 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.81
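For reference, curl reports exit status 28 when its operation times out, which matches the 2m12s the ssh step spent waiting before failing. A quick manual check of the same path, assuming the addons-610936 profile is still running (the URL, Host header, and VM IP 192.168.39.81 are copied from the steps above):

    # confirm the ingress controller is present and serving
    kubectl --context addons-610936 -n ingress-nginx get pods,svc
    # repeat the failing request with an explicit timeout so it fails fast
    out/minikube-linux-amd64 -p addons-610936 ssh "curl -sv --max-time 15 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # check ingress-dns resolution directly against the VM IP
    nslookup hello-john.test 192.168.39.81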
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-610936 -n addons-610936
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 logs -n 25: (1.619690025s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-662663                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-662663 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ --download-only -p binary-mirror-267138 --alsologtostderr --binary-mirror http://127.0.0.1:35611 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-267138 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │                     │
	│ delete  │ -p binary-mirror-267138                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-267138 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ addons  │ disable dashboard -p addons-610936                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │                     │
	│ addons  │ enable dashboard -p addons-610936                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │                     │
	│ start   │ -p addons-610936 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:29 UTC │
	│ addons  │ addons-610936 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ addons  │ addons-610936 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ addons  │ enable headlamp -p addons-610936 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ addons  │ addons-610936 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ addons-610936 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ addons  │ addons-610936 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ ip      │ addons-610936 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ addons-610936 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ addons-610936 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ ssh     │ addons-610936 ssh cat /opt/local-path-provisioner/pvc-479a1c05-a807-4c11-a5ef-bb253fe0f186_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ addons-610936 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ addons-610936 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-610936                                                                                                                                                                                                                                                                                                                                                                                         │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ ssh     │ addons-610936 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	│ addons  │ addons-610936 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ addons-610936 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ addons  │ addons-610936 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ addons  │ addons-610936 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:31 UTC │ 01 Nov 25 09:31 UTC │
	│ ip      │ addons-610936 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-610936        │ jenkins │ v1.37.0 │ 01 Nov 25 09:32 UTC │ 01 Nov 25 09:32 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:26:48
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:26:48.167105  349088 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:26:48.167358  349088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:26:48.167366  349088 out.go:374] Setting ErrFile to fd 2...
	I1101 09:26:48.167370  349088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:26:48.167565  349088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 09:26:48.168108  349088 out.go:368] Setting JSON to false
	I1101 09:26:48.169806  349088 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4156,"bootTime":1761985052,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:26:48.170059  349088 start.go:143] virtualization: kvm guest
	I1101 09:26:48.171753  349088 out.go:179] * [addons-610936] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:26:48.173165  349088 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:26:48.173177  349088 notify.go:221] Checking for updates...
	I1101 09:26:48.174607  349088 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:26:48.175976  349088 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 09:26:48.177208  349088 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 09:26:48.178346  349088 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:26:48.179555  349088 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:26:48.181019  349088 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:26:48.212128  349088 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 09:26:48.213542  349088 start.go:309] selected driver: kvm2
	I1101 09:26:48.213561  349088 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:26:48.213574  349088 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:26:48.214280  349088 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:26:48.214531  349088 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:26:48.214572  349088 cni.go:84] Creating CNI manager for ""
	I1101 09:26:48.214647  349088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:26:48.214656  349088 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:26:48.214699  349088 start.go:353] cluster config:
	{Name:addons-610936 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1101 09:26:48.214803  349088 iso.go:125] acquiring lock: {Name:mkc74493fbbc2007c645c4ed6349cf76e7fb2185 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:26:48.217210  349088 out.go:179] * Starting "addons-610936" primary control-plane node in "addons-610936" cluster
	I1101 09:26:48.218317  349088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:26:48.218360  349088 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:26:48.218369  349088 cache.go:59] Caching tarball of preloaded images
	I1101 09:26:48.218474  349088 preload.go:233] Found /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:26:48.218485  349088 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:26:48.218827  349088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/config.json ...
	I1101 09:26:48.218853  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/config.json: {Name:mk116c209680bfabd911f460b995157de8b4aa36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:26:48.219021  349088 start.go:360] acquireMachinesLock for addons-610936: {Name:mkd221a68334bc82c567a9a06c8563af1e1c38bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 09:26:48.219068  349088 start.go:364] duration metric: took 33.124µs to acquireMachinesLock for "addons-610936"
	I1101 09:26:48.219087  349088 start.go:93] Provisioning new machine with config: &{Name:addons-610936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-610936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:26:48.219137  349088 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 09:26:48.221497  349088 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1101 09:26:48.221687  349088 start.go:159] libmachine.API.Create for "addons-610936" (driver="kvm2")
	I1101 09:26:48.221717  349088 client.go:173] LocalClient.Create starting
	I1101 09:26:48.221840  349088 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem
	I1101 09:26:48.388426  349088 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem
	I1101 09:26:48.635134  349088 main.go:143] libmachine: creating domain...
	I1101 09:26:48.635159  349088 main.go:143] libmachine: creating network...
	I1101 09:26:48.636858  349088 main.go:143] libmachine: found existing default network
	I1101 09:26:48.637100  349088 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:26:48.637762  349088 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001de6a30}
	I1101 09:26:48.637859  349088 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-610936</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:26:48.644232  349088 main.go:143] libmachine: creating private network mk-addons-610936 192.168.39.0/24...
	I1101 09:26:48.720421  349088 main.go:143] libmachine: private network mk-addons-610936 192.168.39.0/24 created
	I1101 09:26:48.720825  349088 main.go:143] libmachine: <network>
	  <name>mk-addons-610936</name>
	  <uuid>c04680c9-4ec5-4b42-a8d4-fa5488b481f3</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:29:1d:e7'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:26:48.720884  349088 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936 ...
	I1101 09:26:48.720914  349088 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21832-344560/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 09:26:48.720926  349088 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 09:26:48.721004  349088 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21832-344560/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21832-344560/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 09:26:48.997189  349088 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa...
	I1101 09:26:49.157415  349088 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/addons-610936.rawdisk...
	I1101 09:26:49.157462  349088 main.go:143] libmachine: Writing magic tar header
	I1101 09:26:49.157488  349088 main.go:143] libmachine: Writing SSH key tar header
	I1101 09:26:49.157566  349088 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936 ...
	I1101 09:26:49.157634  349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936
	I1101 09:26:49.157660  349088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936 (perms=drwx------)
	I1101 09:26:49.157672  349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube/machines
	I1101 09:26:49.157682  349088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube/machines (perms=drwxr-xr-x)
	I1101 09:26:49.157694  349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 09:26:49.157703  349088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube (perms=drwxr-xr-x)
	I1101 09:26:49.157714  349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560
	I1101 09:26:49.157723  349088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560 (perms=drwxrwxr-x)
	I1101 09:26:49.157733  349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 09:26:49.157743  349088 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 09:26:49.157752  349088 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 09:26:49.157762  349088 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 09:26:49.157771  349088 main.go:143] libmachine: checking permissions on dir: /home
	I1101 09:26:49.157785  349088 main.go:143] libmachine: skipping /home - not owner
	I1101 09:26:49.157790  349088 main.go:143] libmachine: defining domain...
	I1101 09:26:49.159224  349088 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-610936</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/addons-610936.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-610936'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:26:49.167197  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:15:bc:d9 in network default
	I1101 09:26:49.167848  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:49.167888  349088 main.go:143] libmachine: starting domain...
	I1101 09:26:49.167893  349088 main.go:143] libmachine: ensuring networks are active...
	I1101 09:26:49.168767  349088 main.go:143] libmachine: Ensuring network default is active
	I1101 09:26:49.169283  349088 main.go:143] libmachine: Ensuring network mk-addons-610936 is active
	I1101 09:26:49.170094  349088 main.go:143] libmachine: getting domain XML...
	I1101 09:26:49.171390  349088 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-610936</name>
	  <uuid>067cbdb7-aeda-471a-aaf4-ef736820bc12</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/addons-610936.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:ff:5a:50'/>
	      <source network='mk-addons-610936'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:15:bc:d9'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:26:50.610080  349088 main.go:143] libmachine: waiting for domain to start...
	I1101 09:26:50.611613  349088 main.go:143] libmachine: domain is now running
	I1101 09:26:50.611630  349088 main.go:143] libmachine: waiting for IP...
	I1101 09:26:50.612434  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:50.612919  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:26:50.612936  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:26:50.613211  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:26:50.613268  349088 retry.go:31] will retry after 191.100412ms: waiting for domain to come up
	I1101 09:26:50.805816  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:50.806422  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:26:50.806439  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:26:50.806763  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:26:50.806802  349088 retry.go:31] will retry after 376.554484ms: waiting for domain to come up
	I1101 09:26:51.185497  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:51.186174  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:26:51.186199  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:26:51.186511  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:26:51.186559  349088 retry.go:31] will retry after 420.878905ms: waiting for domain to come up
	I1101 09:26:51.609310  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:51.609971  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:26:51.609994  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:26:51.610341  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:26:51.610389  349088 retry.go:31] will retry after 566.258468ms: waiting for domain to come up
	I1101 09:26:52.178431  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:52.179181  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:26:52.179209  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:26:52.179569  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:26:52.179618  349088 retry.go:31] will retry after 510.874727ms: waiting for domain to come up
	I1101 09:26:52.692621  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:52.693178  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:26:52.693208  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:26:52.693504  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:26:52.693542  349088 retry.go:31] will retry after 644.803122ms: waiting for domain to come up
	I1101 09:26:53.340554  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:53.341164  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:26:53.341184  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:26:53.341490  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:26:53.341531  349088 retry.go:31] will retry after 1.023512628s: waiting for domain to come up
	I1101 09:26:54.366813  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:54.367498  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:26:54.367519  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:26:54.367825  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:26:54.367879  349088 retry.go:31] will retry after 1.39212269s: waiting for domain to come up
	I1101 09:26:55.761274  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:55.761890  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:26:55.761912  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:26:55.762245  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:26:55.762288  349088 retry.go:31] will retry after 1.430220685s: waiting for domain to come up
	I1101 09:26:57.194971  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:57.195519  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:26:57.195537  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:26:57.195885  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:26:57.195955  349088 retry.go:31] will retry after 2.020848163s: waiting for domain to come up
	I1101 09:26:59.218180  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:26:59.218898  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:26:59.218919  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:26:59.219347  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:26:59.219393  349088 retry.go:31] will retry after 2.273208384s: waiting for domain to come up
	I1101 09:27:01.493989  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:01.494592  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:27:01.494610  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:27:01.494974  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:27:01.495018  349088 retry.go:31] will retry after 3.392803853s: waiting for domain to come up
	I1101 09:27:04.890722  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:04.891366  349088 main.go:143] libmachine: no network interface addresses found for domain addons-610936 (source=lease)
	I1101 09:27:04.891384  349088 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:27:04.891759  349088 main.go:143] libmachine: unable to find current IP address of domain addons-610936 in network mk-addons-610936 (interfaces detected: [])
	I1101 09:27:04.891802  349088 retry.go:31] will retry after 4.312687921s: waiting for domain to come up
	I1101 09:27:09.206313  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.206961  349088 main.go:143] libmachine: domain addons-610936 has current primary IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.206984  349088 main.go:143] libmachine: found domain IP: 192.168.39.81
	I1101 09:27:09.206992  349088 main.go:143] libmachine: reserving static IP address...
	I1101 09:27:09.207571  349088 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-610936", mac: "52:54:00:ff:5a:50", ip: "192.168.39.81"} in network mk-addons-610936
	I1101 09:27:09.398745  349088 main.go:143] libmachine: reserved static IP address 192.168.39.81 for domain addons-610936
	I1101 09:27:09.398795  349088 main.go:143] libmachine: waiting for SSH...
	I1101 09:27:09.398806  349088 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 09:27:09.402334  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.402881  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:09.402923  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.403182  349088 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:09.403470  349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1101 09:27:09.403485  349088 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 09:27:09.508904  349088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:27:09.509279  349088 main.go:143] libmachine: domain creation complete
	I1101 09:27:09.510884  349088 machine.go:94] provisionDockerMachine start ...
	I1101 09:27:09.513206  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.513568  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:09.513591  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.513799  349088 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:09.514069  349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1101 09:27:09.514083  349088 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:27:09.617282  349088 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 09:27:09.617319  349088 buildroot.go:166] provisioning hostname "addons-610936"
	I1101 09:27:09.620116  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.620592  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:09.620626  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.620836  349088 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:09.621089  349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1101 09:27:09.621105  349088 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-610936 && echo "addons-610936" | sudo tee /etc/hostname
	I1101 09:27:09.747625  349088 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-610936
	
	I1101 09:27:09.750468  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.751026  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:09.751064  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.751283  349088 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:09.751531  349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1101 09:27:09.751555  349088 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-610936' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-610936/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-610936' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:27:09.867133  349088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:27:09.867168  349088 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21832-344560/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-344560/.minikube}
	I1101 09:27:09.867193  349088 buildroot.go:174] setting up certificates
	I1101 09:27:09.867211  349088 provision.go:84] configureAuth start
	I1101 09:27:09.870717  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.871266  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:09.871291  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.874072  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.874675  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:09.874720  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.874997  349088 provision.go:143] copyHostCerts
	I1101 09:27:09.875078  349088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/ca.pem (1082 bytes)
	I1101 09:27:09.875223  349088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/cert.pem (1123 bytes)
	I1101 09:27:09.875291  349088 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/key.pem (1679 bytes)
	I1101 09:27:09.875382  349088 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem org=jenkins.addons-610936 san=[127.0.0.1 192.168.39.81 addons-610936 localhost minikube]
	I1101 09:27:09.989492  349088 provision.go:177] copyRemoteCerts
	I1101 09:27:09.989556  349088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:27:09.992515  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.992931  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:09.992954  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:09.993174  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:10.076686  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:27:10.110156  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 09:27:10.144397  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:27:10.176734  349088 provision.go:87] duration metric: took 309.504075ms to configureAuth
	I1101 09:27:10.176769  349088 buildroot.go:189] setting minikube options for container-runtime
	I1101 09:27:10.176994  349088 config.go:182] Loaded profile config "addons-610936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:27:10.180094  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.180526  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:10.180576  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.180772  349088 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:10.181020  349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1101 09:27:10.181044  349088 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:27:10.423886  349088 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:27:10.423927  349088 machine.go:97] duration metric: took 913.019036ms to provisionDockerMachine
	I1101 09:27:10.423960  349088 client.go:176] duration metric: took 22.202220225s to LocalClient.Create
	I1101 09:27:10.423984  349088 start.go:167] duration metric: took 22.202306595s to libmachine.API.Create "addons-610936"
	I1101 09:27:10.423995  349088 start.go:293] postStartSetup for "addons-610936" (driver="kvm2")
	I1101 09:27:10.424021  349088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:27:10.424113  349088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:27:10.427157  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.427601  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:10.427632  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.427844  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:10.511498  349088 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:27:10.517271  349088 info.go:137] Remote host: Buildroot 2025.02
	I1101 09:27:10.517302  349088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-344560/.minikube/addons for local assets ...
	I1101 09:27:10.517385  349088 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-344560/.minikube/files for local assets ...
	I1101 09:27:10.517414  349088 start.go:296] duration metric: took 93.412558ms for postStartSetup
	I1101 09:27:10.520815  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.521283  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:10.521311  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.521634  349088 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/config.json ...
	I1101 09:27:10.521902  349088 start.go:128] duration metric: took 22.302751877s to createHost
	I1101 09:27:10.524323  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.524907  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:10.524931  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.525104  349088 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:10.525313  349088 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.81 22 <nil> <nil>}
	I1101 09:27:10.525323  349088 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 09:27:10.630156  349088 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761989230.587115009
	
	I1101 09:27:10.630181  349088 fix.go:216] guest clock: 1761989230.587115009
	I1101 09:27:10.630189  349088 fix.go:229] Guest: 2025-11-01 09:27:10.587115009 +0000 UTC Remote: 2025-11-01 09:27:10.521918664 +0000 UTC m=+22.404168301 (delta=65.196345ms)
	I1101 09:27:10.630208  349088 fix.go:200] guest clock delta is within tolerance: 65.196345ms
	I1101 09:27:10.630214  349088 start.go:83] releasing machines lock for "addons-610936", held for 22.411135579s
	I1101 09:27:10.633362  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.633787  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:10.633814  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.634490  349088 ssh_runner.go:195] Run: cat /version.json
	I1101 09:27:10.634691  349088 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:27:10.637655  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.638048  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:10.638073  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.638091  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.638260  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:10.638636  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:10.638668  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:10.638882  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:10.726485  349088 ssh_runner.go:195] Run: systemctl --version
	I1101 09:27:10.753372  349088 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:27:10.918384  349088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:27:10.926453  349088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:27:10.926532  349088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:27:10.953477  349088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:27:10.953509  349088 start.go:496] detecting cgroup driver to use...
	I1101 09:27:10.953584  349088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:27:10.975497  349088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:27:10.993511  349088 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:27:10.993614  349088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:27:11.013163  349088 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:27:11.031045  349088 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:27:11.180352  349088 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:27:11.402043  349088 docker.go:234] disabling docker service ...
	I1101 09:27:11.402149  349088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:27:11.421224  349088 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:27:11.438153  349088 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:27:11.600805  349088 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:27:11.754881  349088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:27:11.771449  349088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:27:11.797432  349088 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:27:11.797544  349088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:27:11.812142  349088 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:27:11.812249  349088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:27:11.826346  349088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:27:11.841711  349088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:27:11.855380  349088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:27:11.869917  349088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:27:11.884150  349088 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:27:11.906530  349088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
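The sed edits above all target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and allow unprivileged low ports via default_sysctls. A sketch of what the resulting drop-in roughly looks like, reconstructed from the commands above (the file on the guest may group keys differently):
	$ cat /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]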
	I1101 09:27:11.920203  349088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:27:11.932360  349088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 09:27:11.932437  349088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 09:27:11.954832  349088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
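The sysctl probe fails only because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on directly through /proc. A hedged sketch of making both prerequisites persistent across reboots on a systemd guest (a conventional approach, not something the log shows minikube doing):
	$ echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
	$ printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	$ sudo sysctl --system   # re-applies settings from all sysctl.d drop-ins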
	I1101 09:27:11.968256  349088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:27:12.115585  349088 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:27:12.234503  349088 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:27:12.234602  349088 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:27:12.240643  349088 start.go:564] Will wait 60s for crictl version
	I1101 09:27:12.240732  349088 ssh_runner.go:195] Run: which crictl
	I1101 09:27:12.245393  349088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 09:27:12.291466  349088 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 09:27:12.291608  349088 ssh_runner.go:195] Run: crio --version
	I1101 09:27:12.323851  349088 ssh_runner.go:195] Run: crio --version
	I1101 09:27:12.358425  349088 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 09:27:12.362465  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:12.362850  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:12.362882  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:12.363077  349088 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 09:27:12.368326  349088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:27:12.385147  349088 kubeadm.go:884] updating cluster {Name:addons-610936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:27:12.385306  349088 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:27:12.385374  349088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:27:12.428654  349088 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 09:27:12.428761  349088 ssh_runner.go:195] Run: which lz4
	I1101 09:27:12.433783  349088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 09:27:12.439050  349088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 09:27:12.439091  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 09:27:14.042686  349088 crio.go:462] duration metric: took 1.60892747s to copy over tarball
	I1101 09:27:14.042766  349088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 09:27:15.917932  349088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.875136305s)
	I1101 09:27:15.917968  349088 crio.go:469] duration metric: took 1.875249656s to extract the tarball
	I1101 09:27:15.917983  349088 ssh_runner.go:146] rm: /preloaded.tar.lz4
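The 409 MB tarball extracted above comes from the host-side preload cache referenced in the scp step; whether that cache is already populated can be checked on the Jenkins host with a quick listing (a verification sketch, not part of the test flow):
	$ ls -lh /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/
	# expect preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 (~409 MB, matching the scp above)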
	I1101 09:27:15.960792  349088 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:27:16.009430  349088 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:27:16.009457  349088 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:27:16.009466  349088 kubeadm.go:935] updating node { 192.168.39.81 8443 v1.34.1 crio true true} ...
	I1101 09:27:16.009578  349088 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-610936 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.81
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-610936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
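The unit snippet above becomes the kubelet's 10-kubeadm.conf drop-in (written a few steps below); one way to confirm which ExecStart flags the running kubelet actually picked up is to dump the merged unit over minikube ssh (a sketch, assuming the cluster is up):
	$ minikube -p addons-610936 ssh "systemctl cat kubelet"
	# prints kubelet.service plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf,
	# including the --hostname-override=addons-610936 and --node-ip=192.168.39.81 flags shown above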
	I1101 09:27:16.009675  349088 ssh_runner.go:195] Run: crio config
	I1101 09:27:16.060176  349088 cni.go:84] Creating CNI manager for ""
	I1101 09:27:16.060212  349088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:27:16.060242  349088 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:27:16.060276  349088 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.81 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-610936 NodeName:addons-610936 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.81"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.81 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:27:16.060445  349088 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.81
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-610936"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.81"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.81"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
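This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new below and later handed to kubeadm init. For a config of this shape, a dry run is a cheap sanity check before it touches the node (a sketch; assumes the kubeadm binary staged under /var/lib/minikube/binaries is on the PATH):
	$ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# validates the InitConfiguration/ClusterConfiguration and prints the actions and
	# manifests kubeadm would generate, without starting a control plane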
	I1101 09:27:16.060527  349088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:27:16.074680  349088 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:27:16.074776  349088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:27:16.087881  349088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1101 09:27:16.111202  349088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:27:16.133656  349088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1101 09:27:16.156714  349088 ssh_runner.go:195] Run: grep 192.168.39.81	control-plane.minikube.internal$ /etc/hosts
	I1101 09:27:16.161539  349088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.81	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:27:16.178210  349088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:27:16.328129  349088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:27:16.365521  349088 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936 for IP: 192.168.39.81
	I1101 09:27:16.365546  349088 certs.go:195] generating shared ca certs ...
	I1101 09:27:16.365564  349088 certs.go:227] acquiring lock for ca certs: {Name:mkba0fe79f6b0ed99353299aaf34c6fbc547c6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:16.365755  349088 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key
	I1101 09:27:16.744900  349088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt ...
	I1101 09:27:16.744937  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt: {Name:mk70cb9468642ed5e7f9912a400b1e74296dea21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:16.745125  349088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key ...
	I1101 09:27:16.745142  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key: {Name:mked04b0822cde1b132009ea6307ff8ea52511e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:16.745220  349088 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key
	I1101 09:27:16.916593  349088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.crt ...
	I1101 09:27:16.916628  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.crt: {Name:mk898d13bfe08ac956aa016515b4e39e57dce709 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:16.916816  349088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key ...
	I1101 09:27:16.916828  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key: {Name:mk881f64e8f0f9e8118c2ea53f7a353ac29f8b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:16.916913  349088 certs.go:257] generating profile certs ...
	I1101 09:27:16.916976  349088 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.key
	I1101 09:27:16.916991  349088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt with IP's: []
	I1101 09:27:17.062434  349088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt ...
	I1101 09:27:17.062464  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: {Name:mk4a0448dcedd6f68d492b4d5f914e5cca0df07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:17.062634  349088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.key ...
	I1101 09:27:17.062646  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.key: {Name:mk8a00c8e5b18bb947e29b9b32095da84b4faa70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:17.062726  349088 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key.3a89fe33
	I1101 09:27:17.062744  349088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt.3a89fe33 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.81]
	I1101 09:27:17.220204  349088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt.3a89fe33 ...
	I1101 09:27:17.220242  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt.3a89fe33: {Name:mk6e5f9fc47945ea3e26016859030a8f20a5f7ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:17.220428  349088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key.3a89fe33 ...
	I1101 09:27:17.220442  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key.3a89fe33: {Name:mk0b48ff33f7be98383eb1c773640c67bdeb8d91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:17.220515  349088 certs.go:382] copying /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt.3a89fe33 -> /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt
	I1101 09:27:17.220593  349088 certs.go:386] copying /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key.3a89fe33 -> /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key
	I1101 09:27:17.220642  349088 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.key
	I1101 09:27:17.220664  349088 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.crt with IP's: []
	I1101 09:27:17.957328  349088 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.crt ...
	I1101 09:27:17.957365  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.crt: {Name:mk4099c959afd20f992944add321fedf171c1f59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:17.957555  349088 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.key ...
	I1101 09:27:17.957571  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.key: {Name:mk6dde8185d059ceb1f1fb5e409351057e2783ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:17.957764  349088 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:27:17.957801  349088 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:27:17.957828  349088 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:27:17.957849  349088 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem (1679 bytes)
	I1101 09:27:17.958435  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:27:18.006290  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:27:18.049283  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:27:18.084041  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 09:27:18.118882  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:27:18.152497  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:27:18.187113  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:27:18.221898  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:27:18.257661  349088 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:27:18.292943  349088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
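With the profile certificates copied under /var/lib/minikube/certs, the SANs requested earlier (10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.81) can be double-checked on the guest with openssl (a verification sketch, not part of the test flow):
	$ sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
	# expect IP entries for the service VIP, localhost, and the node IP listed in the generation step above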
	I1101 09:27:18.316398  349088 ssh_runner.go:195] Run: openssl version
	I1101 09:27:18.324206  349088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:27:18.338607  349088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:27:18.344696  349088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:27:18.344792  349088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:27:18.353174  349088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
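The b5213941.0 link name is the OpenSSL subject hash of the minikube CA, which is how tools that scan /etc/ssl/certs by hash locate it; the value printed by the x509 -hash command above should match the link name (sketch):
	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ ls -l /etc/ssl/certs/b5213941.0   # -> /etc/ssl/certs/minikubeCA.pem per the ln -fs above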
	I1101 09:27:18.368244  349088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:27:18.374223  349088 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:27:18.374290  349088 kubeadm.go:401] StartCluster: {Name:addons-610936 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610936 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:27:18.374380  349088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:27:18.374487  349088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:27:18.418220  349088 cri.go:89] found id: ""
	I1101 09:27:18.418311  349088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:27:18.431638  349088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:27:18.445051  349088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:27:18.458177  349088 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:27:18.458202  349088 kubeadm.go:158] found existing configuration files:
	
	I1101 09:27:18.458256  349088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:27:18.471640  349088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:27:18.471726  349088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:27:18.485639  349088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:27:18.498284  349088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:27:18.498356  349088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:27:18.512788  349088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:27:18.526068  349088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:27:18.526134  349088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:27:18.539786  349088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:27:18.555113  349088 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:27:18.555217  349088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:27:18.571565  349088 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 09:27:18.762017  349088 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:27:31.531695  349088 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:27:31.531837  349088 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:27:31.532005  349088 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:27:31.532103  349088 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:27:31.532230  349088 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:27:31.532316  349088 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:27:31.535138  349088 out.go:252]   - Generating certificates and keys ...
	I1101 09:27:31.535262  349088 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:27:31.535356  349088 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:27:31.535456  349088 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:27:31.535514  349088 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:27:31.535563  349088 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:27:31.535608  349088 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:27:31.535652  349088 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:27:31.535753  349088 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-610936 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
	I1101 09:27:31.535803  349088 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:27:31.535929  349088 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-610936 localhost] and IPs [192.168.39.81 127.0.0.1 ::1]
	I1101 09:27:31.535987  349088 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:27:31.536038  349088 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:27:31.536075  349088 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:27:31.536122  349088 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:27:31.536164  349088 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:27:31.536211  349088 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:27:31.536259  349088 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:27:31.536326  349088 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:27:31.536392  349088 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:27:31.536465  349088 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:27:31.536527  349088 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:27:31.537895  349088 out.go:252]   - Booting up control plane ...
	I1101 09:27:31.537990  349088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:27:31.538063  349088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:27:31.538121  349088 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:27:31.538211  349088 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:27:31.538300  349088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:27:31.538394  349088 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:27:31.538469  349088 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:27:31.538504  349088 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:27:31.538702  349088 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:27:31.538838  349088 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:27:31.538912  349088 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00146068s
	I1101 09:27:31.539018  349088 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:27:31.539146  349088 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.81:8443/livez
	I1101 09:27:31.539227  349088 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:27:31.539296  349088 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:27:31.539356  349088 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.713243058s
	I1101 09:27:31.539429  349088 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.738987655s
	I1101 09:27:31.539508  349088 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.004913886s
	I1101 09:27:31.539631  349088 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:27:31.539734  349088 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:27:31.539786  349088 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:27:31.539971  349088 kubeadm.go:319] [mark-control-plane] Marking the node addons-610936 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:27:31.540031  349088 kubeadm.go:319] [bootstrap-token] Using token: hxtxuv.39vanw3sg4xqodfn
	I1101 09:27:31.541457  349088 out.go:252]   - Configuring RBAC rules ...
	I1101 09:27:31.541610  349088 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:27:31.541720  349088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:27:31.541880  349088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:27:31.542033  349088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:27:31.542167  349088 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:27:31.542271  349088 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:27:31.542400  349088 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:27:31.542466  349088 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:27:31.542522  349088 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:27:31.542535  349088 kubeadm.go:319] 
	I1101 09:27:31.542582  349088 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:27:31.542597  349088 kubeadm.go:319] 
	I1101 09:27:31.542721  349088 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:27:31.542738  349088 kubeadm.go:319] 
	I1101 09:27:31.542770  349088 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:27:31.542822  349088 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:27:31.542887  349088 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:27:31.542899  349088 kubeadm.go:319] 
	I1101 09:27:31.542944  349088 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:27:31.542951  349088 kubeadm.go:319] 
	I1101 09:27:31.542991  349088 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:27:31.542996  349088 kubeadm.go:319] 
	I1101 09:27:31.543037  349088 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:27:31.543129  349088 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:27:31.543222  349088 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:27:31.543232  349088 kubeadm.go:319] 
	I1101 09:27:31.543333  349088 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:27:31.543409  349088 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:27:31.543416  349088 kubeadm.go:319] 
	I1101 09:27:31.543483  349088 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hxtxuv.39vanw3sg4xqodfn \
	I1101 09:27:31.543568  349088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8453eb9bfec31a6f8a04d37b2b2ee7df64866720c9de26f8457973b66dd9966b \
	I1101 09:27:31.543594  349088 kubeadm.go:319] 	--control-plane 
	I1101 09:27:31.543598  349088 kubeadm.go:319] 
	I1101 09:27:31.543663  349088 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:27:31.543669  349088 kubeadm.go:319] 
	I1101 09:27:31.543771  349088 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hxtxuv.39vanw3sg4xqodfn \
	I1101 09:27:31.543948  349088 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8453eb9bfec31a6f8a04d37b2b2ee7df64866720c9de26f8457973b66dd9966b 
	I1101 09:27:31.543974  349088 cni.go:84] Creating CNI manager for ""
	I1101 09:27:31.543987  349088 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:27:31.545681  349088 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 09:27:31.547280  349088 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 09:27:31.566888  349088 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
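The 496-byte conflist copied here wires up the bridge CNI for the 10.244.0.0/16 pod CIDR chosen above. A representative bridge conflist of this shape, as an illustrative sketch only (not the exact file minikube writes):
	$ cat /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.4.0",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}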
	I1101 09:27:31.592379  349088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:27:31.592444  349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:31.592477  349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-610936 minikube.k8s.io/updated_at=2025_11_01T09_27_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=addons-610936 minikube.k8s.io/primary=true
	I1101 09:27:31.738248  349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:31.827104  349088 ops.go:34] apiserver oom_adj: -16
	I1101 09:27:32.239332  349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:32.738661  349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:33.238579  349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:33.738989  349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:34.238462  349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:34.739204  349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:35.238611  349088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:27:35.331026  349088 kubeadm.go:1114] duration metric: took 3.738648845s to wait for elevateKubeSystemPrivileges
	I1101 09:27:35.331104  349088 kubeadm.go:403] duration metric: took 16.956793709s to StartCluster
	I1101 09:27:35.331134  349088 settings.go:142] acquiring lock: {Name:mk0cdfdd584044c1d93f88e46e35ef3af10fed81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:35.331283  349088 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 09:27:35.331763  349088 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/kubeconfig: {Name:mkaf75364e29c8ee4b260af678d355333969cf4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:27:35.332032  349088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:27:35.332033  349088 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.81 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:27:35.332067  349088 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 09:27:35.332278  349088 addons.go:70] Setting yakd=true in profile "addons-610936"
	I1101 09:27:35.332287  349088 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-610936"
	I1101 09:27:35.332298  349088 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-610936"
	I1101 09:27:35.332309  349088 addons.go:70] Setting registry=true in profile "addons-610936"
	I1101 09:27:35.332319  349088 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-610936"
	I1101 09:27:35.332321  349088 addons.go:239] Setting addon registry=true in "addons-610936"
	I1101 09:27:35.332351  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.332353  349088 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-610936"
	I1101 09:27:35.332364  349088 addons.go:70] Setting default-storageclass=true in profile "addons-610936"
	I1101 09:27:35.332363  349088 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-610936"
	I1101 09:27:35.332381  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.332388  349088 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-610936"
	I1101 09:27:35.332390  349088 addons.go:70] Setting gcp-auth=true in profile "addons-610936"
	I1101 09:27:35.332409  349088 mustload.go:66] Loading cluster: addons-610936
	I1101 09:27:35.332398  349088 addons.go:70] Setting cloud-spanner=true in profile "addons-610936"
	I1101 09:27:35.332302  349088 addons.go:239] Setting addon yakd=true in "addons-610936"
	I1101 09:27:35.332444  349088 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-610936"
	I1101 09:27:35.332451  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.332456  349088 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-610936"
	I1101 09:27:35.332464  349088 addons.go:70] Setting ingress-dns=true in profile "addons-610936"
	I1101 09:27:35.332531  349088 addons.go:239] Setting addon ingress-dns=true in "addons-610936"
	I1101 09:27:35.332567  349088 config.go:182] Loaded profile config "addons-610936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:27:35.332571  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.333079  349088 addons.go:70] Setting ingress=true in profile "addons-610936"
	I1101 09:27:35.333102  349088 addons.go:239] Setting addon ingress=true in "addons-610936"
	I1101 09:27:35.333135  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.333187  349088 addons.go:70] Setting registry-creds=true in profile "addons-610936"
	I1101 09:27:35.333216  349088 addons.go:239] Setting addon registry-creds=true in "addons-610936"
	I1101 09:27:35.333221  349088 addons.go:70] Setting storage-provisioner=true in profile "addons-610936"
	I1101 09:27:35.333240  349088 addons.go:239] Setting addon storage-provisioner=true in "addons-610936"
	I1101 09:27:35.333269  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.333286  349088 addons.go:70] Setting inspektor-gadget=true in profile "addons-610936"
	I1101 09:27:35.333301  349088 addons.go:239] Setting addon inspektor-gadget=true in "addons-610936"
	I1101 09:27:35.333318  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.332380  349088 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-610936"
	I1101 09:27:35.332355  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.332279  349088 config.go:182] Loaded profile config "addons-610936": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:27:35.334158  349088 addons.go:70] Setting volcano=true in profile "addons-610936"
	I1101 09:27:35.334179  349088 addons.go:239] Setting addon volcano=true in "addons-610936"
	I1101 09:27:35.334203  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.332414  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.334441  349088 addons.go:70] Setting metrics-server=true in profile "addons-610936"
	I1101 09:27:35.334463  349088 addons.go:239] Setting addon metrics-server=true in "addons-610936"
	I1101 09:27:35.334487  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.334591  349088 addons.go:70] Setting volumesnapshots=true in profile "addons-610936"
	I1101 09:27:35.334629  349088 addons.go:239] Setting addon volumesnapshots=true in "addons-610936"
	I1101 09:27:35.334654  349088 out.go:179] * Verifying Kubernetes components...
	I1101 09:27:35.334657  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.333270  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.332432  349088 addons.go:239] Setting addon cloud-spanner=true in "addons-610936"
	I1101 09:27:35.334859  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.336727  349088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:27:35.338761  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.341333  349088 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-610936"
	I1101 09:27:35.341384  349088 host.go:66] Checking if "addons-610936" exists ...
	W1101 09:27:35.342991  349088 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 09:27:35.343436  349088 addons.go:239] Setting addon default-storageclass=true in "addons-610936"
	I1101 09:27:35.343479  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:35.344044  349088 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 09:27:35.344057  349088 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 09:27:35.344077  349088 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 09:27:35.344124  349088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:27:35.344137  349088 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 09:27:35.345201  349088 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 09:27:35.345206  349088 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 09:27:35.345206  349088 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:27:35.345217  349088 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 09:27:35.345216  349088 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 09:27:35.345230  349088 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 09:27:35.345246  349088 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 09:27:35.345770  349088 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:27:35.346847  349088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:27:35.347023  349088 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:27:35.347049  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 09:27:35.346898  349088 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 09:27:35.347151  349088 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 09:27:35.346948  349088 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1101 09:27:35.347694  349088 out.go:179]   - Using image docker.io/busybox:stable
	I1101 09:27:35.347704  349088 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 09:27:35.348099  349088 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 09:27:35.347745  349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 09:27:35.348195  349088 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 09:27:35.347766  349088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:27:35.348271  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:27:35.348508  349088 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 09:27:35.348551  349088 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:27:35.348557  349088 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:27:35.348565  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 09:27:35.348568  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 09:27:35.348638  349088 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 09:27:35.348643  349088 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 09:27:35.348757  349088 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:27:35.348763  349088 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 09:27:35.348767  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 09:27:35.348885  349088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 09:27:35.349507  349088 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 09:27:35.349528  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 09:27:35.350226  349088 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 09:27:35.350242  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 09:27:35.350843  349088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 09:27:35.350875  349088 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 09:27:35.351638  349088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:27:35.353581  349088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 09:27:35.353647  349088 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:27:35.353662  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 09:27:35.353731  349088 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:27:35.353744  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 09:27:35.356403  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.356541  349088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 09:27:35.357052  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.357229  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.358212  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.358255  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.358556  349088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 09:27:35.359462  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.359589  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.359618  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.359726  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.359805  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.359276  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.360016  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.360247  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.360847  349088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 09:27:35.361116  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.361266  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.361851  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.361899  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.361913  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.361957  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.362180  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.362453  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.362578  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.362699  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.362843  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.362999  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.363031  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.363081  349088 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 09:27:35.363766  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.363800  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.364037  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.364074  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.364100  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.364190  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.364227  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.364237  349088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 09:27:35.364252  349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 09:27:35.364330  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.364364  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.364827  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.364879  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.364992  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.365006  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.365026  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.365117  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.365416  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.365451  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.365501  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.365680  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.365846  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.365893  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.366234  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.366372  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.366394  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.366787  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.366826  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.366844  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.366854  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.367138  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.367141  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:35.368547  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.369191  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:35.369225  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:35.369389  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	W1101 09:27:35.729128  349088 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59904->192.168.39.81:22: read: connection reset by peer
	I1101 09:27:35.729171  349088 retry.go:31] will retry after 190.903161ms: ssh: handshake failed: read tcp 192.168.39.1:59904->192.168.39.81:22: read: connection reset by peer
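The SSH handshake failure above is absorbed by a retry helper, and several of the later addon applies take the same path (the "will retry after ..." lines). A minimal sketch of that pattern, with made-up attempt counts and backoff bounds rather than minikube's actual retry parameters:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries fn up to attempts times, sleeping a growing,
    // randomized delay between tries so concurrent callers do not retry in
    // lockstep. Names and bounds are illustrative.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        attempt := 0
        _ = retryWithBackoff(5, 200*time.Millisecond, func() error {
            attempt++
            if attempt < 3 {
                return fmt.Errorf("ssh handshake failed (attempt %d)", attempt)
            }
            return nil
        })
    }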
	I1101 09:27:36.169539  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 09:27:36.172047  349088 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 09:27:36.172080  349088 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 09:27:36.197501  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 09:27:36.215417  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:27:36.216345  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:27:36.218262  349088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 09:27:36.218284  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 09:27:36.233277  349088 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:27:36.233306  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 09:27:36.276463  349088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:27:36.276504  349088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
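The long sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts stanza ahead of the forward directive, so host.minikube.internal resolves to the host gateway (192.168.39.1), and adds a log directive ahead of errors. The effective Corefile addition, extracted from that command (whitespace illustrative), is:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }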
	I1101 09:27:36.319910  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 09:27:36.401223  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 09:27:36.402616  349088 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 09:27:36.402645  349088 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 09:27:36.474457  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 09:27:36.545082  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 09:27:36.565365  349088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 09:27:36.565405  349088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 09:27:36.716748  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 09:27:36.757480  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:27:36.822242  349088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 09:27:36.822276  349088 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 09:27:36.899735  349088 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 09:27:36.899768  349088 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 09:27:37.256955  349088 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 09:27:37.256994  349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 09:27:37.400065  349088 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:27:37.400097  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 09:27:37.407524  349088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 09:27:37.407553  349088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 09:27:37.688784  349088 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:27:37.688814  349088 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 09:27:37.822713  349088 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 09:27:37.822758  349088 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 09:27:37.951507  349088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 09:27:37.951539  349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 09:27:38.005672  349088 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 09:27:38.005711  349088 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 09:27:38.029745  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 09:27:38.323329  349088 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:27:38.323367  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 09:27:38.456225  349088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 09:27:38.456256  349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 09:27:38.460477  349088 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 09:27:38.460514  349088 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 09:27:38.500149  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 09:27:38.728678  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 09:27:38.896168  349088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 09:27:38.896207  349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 09:27:38.896644  349088 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:27:38.896672  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 09:27:39.290048  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.120463391s)
	I1101 09:27:39.290105  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.0925653s)
	I1101 09:27:39.290134  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.07376527s)
	I1101 09:27:39.291321  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:27:39.472727  349088 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 09:27:39.472764  349088 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 09:27:40.004706  349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 09:27:40.004742  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 09:27:40.542351  349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 09:27:40.542382  349088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 09:27:41.141184  349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 09:27:41.141212  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 09:27:41.601642  349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 09:27:41.601674  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 09:27:42.325101  349088 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:27:42.325139  349088 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 09:27:42.613924  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.398452068s)
	I1101 09:27:42.613977  349088 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.337439597s)
	I1101 09:27:42.614004  349088 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 09:27:42.614023  349088 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.337520699s)
	I1101 09:27:42.614092  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.294151559s)
	I1101 09:27:42.614972  349088 node_ready.go:35] waiting up to 6m0s for node "addons-610936" to be "Ready" ...
	I1101 09:27:42.637786  349088 node_ready.go:49] node "addons-610936" is "Ready"
	I1101 09:27:42.637826  349088 node_ready.go:38] duration metric: took 22.817502ms for node "addons-610936" to be "Ready" ...
	I1101 09:27:42.637844  349088 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:27:42.637919  349088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:27:42.790062  349088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 09:27:42.793672  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:42.794246  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:42.794278  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:42.794489  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:42.852441  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 09:27:43.118636  349088 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-610936" context rescaled to 1 replicas
	I1101 09:27:43.411524  349088 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 09:27:43.858740  349088 addons.go:239] Setting addon gcp-auth=true in "addons-610936"
	I1101 09:27:43.858802  349088 host.go:66] Checking if "addons-610936" exists ...
	I1101 09:27:43.860804  349088 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 09:27:43.863633  349088 main.go:143] libmachine: domain addons-610936 has defined MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:43.864100  349088 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ff:5a:50", ip: ""} in network mk-addons-610936: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:05 +0000 UTC Type:0 Mac:52:54:00:ff:5a:50 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:addons-610936 Clientid:01:52:54:00:ff:5a:50}
	I1101 09:27:43.864124  349088 main.go:143] libmachine: domain addons-610936 has defined IP address 192.168.39.81 and MAC address 52:54:00:ff:5a:50 in network mk-addons-610936
	I1101 09:27:43.864271  349088 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/addons-610936/id_rsa Username:docker}
	I1101 09:27:45.858778  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.457511483s)
	I1101 09:27:45.858837  349088 addons.go:480] Verifying addon ingress=true in "addons-610936"
	I1101 09:27:45.858849  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.384349472s)
	I1101 09:27:45.858915  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.313798771s)
	I1101 09:27:45.858959  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.14217526s)
	I1101 09:27:45.859051  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (9.101542047s)
	W1101 09:27:45.859083  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:27:45.859107  349088 retry.go:31] will retry after 325.822187ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:27:45.859161  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.829382241s)
	I1101 09:27:45.859198  349088 addons.go:480] Verifying addon registry=true in "addons-610936"
	I1101 09:27:45.859336  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.130621873s)
	I1101 09:27:45.859292  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.3591024s)
	I1101 09:27:45.859422  349088 addons.go:480] Verifying addon metrics-server=true in "addons-610936"
	I1101 09:27:45.860418  349088 out.go:179] * Verifying ingress addon...
	I1101 09:27:45.861227  349088 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-610936 service yakd-dashboard -n yakd-dashboard
	
	I1101 09:27:45.861262  349088 out.go:179] * Verifying registry addon...
	I1101 09:27:45.862656  349088 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 09:27:45.863701  349088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 09:27:45.892648  349088 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 09:27:45.892677  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:45.892723  349088 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 09:27:45.892745  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
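The two kapi.go waits above poll the cluster by label selector until every matching pod reports Ready. Expressed with client-go, the loop looks roughly like the sketch below; the helper name, poll interval and kubeconfig path are assumptions for illustration, not minikube's kapi implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls pods matching selector in ns until they are all Ready
    // or the timeout elapses.
    func waitForLabel(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 && allReady(pods.Items) {
                return nil
            }
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("pods %q in %q not ready within %s", selector, ns, timeout)
    }

    func allReady(pods []corev1.Pod) bool {
        for _, p := range pods {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            if !ready {
                return false
            }
        }
        return true
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = waitForLabel(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute)
    }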
	I1101 09:27:45.910111  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.618743413s)
	I1101 09:27:45.910140  349088 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.272199257s)
	I1101 09:27:45.910173  349088 api_server.go:72] duration metric: took 10.578008261s to wait for apiserver process to appear ...
	I1101 09:27:45.910181  349088 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:27:45.910207  349088 api_server.go:253] Checking apiserver healthz at https://192.168.39.81:8443/healthz ...
	W1101 09:27:45.910196  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:27:45.910344  349088 retry.go:31] will retry after 301.381616ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 09:27:45.930071  349088 api_server.go:279] https://192.168.39.81:8443/healthz returned 200:
	ok
	I1101 09:27:45.942691  349088 api_server.go:141] control plane version: v1.34.1
	I1101 09:27:45.942722  349088 api_server.go:131] duration metric: took 32.53467ms to wait for apiserver health ...
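The apiserver health wait above is a plain HTTPS poll against /healthz until it answers 200 "ok". Below is a self-contained sketch of that loop, with the endpoint taken from the log; skipping certificate verification is a shortcut for the sketch only, a real client would trust the cluster CA instead.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        _ = waitForHealthz("https://192.168.39.81:8443/healthz", 2*time.Minute)
    }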
	I1101 09:27:45.942732  349088 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:27:46.016731  349088 system_pods.go:59] 16 kube-system pods found
	I1101 09:27:46.016782  349088 system_pods.go:61] "amd-gpu-device-plugin-5pdrl" [b8e4e785-d8f6-4d48-8364-9ae272d16ed4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:27:46.016793  349088 system_pods.go:61] "coredns-66bc5c9577-87j4r" [cf4e582b-3f40-44c4-afae-bfbf0a9399a9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:27:46.016801  349088 system_pods.go:61] "coredns-66bc5c9577-gbqkt" [5e62dfed-a46f-4e51-a84d-07825fc7bc70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:27:46.016809  349088 system_pods.go:61] "etcd-addons-610936" [7f2ac281-5593-4be9-b542-b326a101d645] Running
	I1101 09:27:46.016815  349088 system_pods.go:61] "kube-apiserver-addons-610936" [607f02cf-1d16-4146-a5c2-a31b94c00d75] Running
	I1101 09:27:46.016819  349088 system_pods.go:61] "kube-controller-manager-addons-610936" [e1d392bf-c7a3-456a-a1df-9e5e4f598dde] Running
	I1101 09:27:46.016825  349088 system_pods.go:61] "kube-ingress-dns-minikube" [2b8eca17-1e14-4918-b8d0-991e96bd3770] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:27:46.016828  349088 system_pods.go:61] "kube-proxy-wm94c" [e0d02112-bf3c-4352-a3de-02ca7e44f294] Running
	I1101 09:27:46.016832  349088 system_pods.go:61] "kube-scheduler-addons-610936" [37a732ca-6715-40d4-b050-425213eb3eac] Running
	I1101 09:27:46.016837  349088 system_pods.go:61] "metrics-server-85b7d694d7-br7l2" [04c85380-ef98-4ac1-bf3a-5609222c5b88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:27:46.016844  349088 system_pods.go:61] "nvidia-device-plugin-daemonset-668jz" [8afeb20e-4679-4c6a-b8aa-615540852043] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:27:46.016852  349088 system_pods.go:61] "registry-6b586f9694-zk6f9" [8ca1aaec-2bd9-4d71-8886-79afedd32769] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:27:46.016857  349088 system_pods.go:61] "registry-creds-764b6fb674-nz5gr" [824af6be-aaa9-462e-afe0-7c82d519ffe4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:27:46.016873  349088 system_pods.go:61] "registry-proxy-p6swb" [bb847846-a739-4165-9043-1a8601f04bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:27:46.016879  349088 system_pods.go:61] "snapshot-controller-7d9fbc56b8-g6nhw" [9955c1b1-f135-48db-be87-91fdaaa7c2f0] Pending
	I1101 09:27:46.016889  349088 system_pods.go:61] "storage-provisioner" [bbbac5d1-8445-4301-918f-9e1633b097d2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:27:46.016899  349088 system_pods.go:74] duration metric: took 74.158799ms to wait for pod list to return data ...
	I1101 09:27:46.016915  349088 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:27:46.070857  349088 default_sa.go:45] found service account: "default"
	I1101 09:27:46.070905  349088 default_sa.go:55] duration metric: took 53.980293ms for default service account to be created ...
	I1101 09:27:46.070920  349088 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:27:46.113006  349088 system_pods.go:86] 17 kube-system pods found
	I1101 09:27:46.113053  349088 system_pods.go:89] "amd-gpu-device-plugin-5pdrl" [b8e4e785-d8f6-4d48-8364-9ae272d16ed4] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 09:27:46.113063  349088 system_pods.go:89] "coredns-66bc5c9577-87j4r" [cf4e582b-3f40-44c4-afae-bfbf0a9399a9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:27:46.113076  349088 system_pods.go:89] "coredns-66bc5c9577-gbqkt" [5e62dfed-a46f-4e51-a84d-07825fc7bc70] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:27:46.113082  349088 system_pods.go:89] "etcd-addons-610936" [7f2ac281-5593-4be9-b542-b326a101d645] Running
	I1101 09:27:46.113087  349088 system_pods.go:89] "kube-apiserver-addons-610936" [607f02cf-1d16-4146-a5c2-a31b94c00d75] Running
	I1101 09:27:46.113092  349088 system_pods.go:89] "kube-controller-manager-addons-610936" [e1d392bf-c7a3-456a-a1df-9e5e4f598dde] Running
	I1101 09:27:46.113118  349088 system_pods.go:89] "kube-ingress-dns-minikube" [2b8eca17-1e14-4918-b8d0-991e96bd3770] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 09:27:46.113130  349088 system_pods.go:89] "kube-proxy-wm94c" [e0d02112-bf3c-4352-a3de-02ca7e44f294] Running
	I1101 09:27:46.113137  349088 system_pods.go:89] "kube-scheduler-addons-610936" [37a732ca-6715-40d4-b050-425213eb3eac] Running
	I1101 09:27:46.113148  349088 system_pods.go:89] "metrics-server-85b7d694d7-br7l2" [04c85380-ef98-4ac1-bf3a-5609222c5b88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 09:27:46.113156  349088 system_pods.go:89] "nvidia-device-plugin-daemonset-668jz" [8afeb20e-4679-4c6a-b8aa-615540852043] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 09:27:46.113168  349088 system_pods.go:89] "registry-6b586f9694-zk6f9" [8ca1aaec-2bd9-4d71-8886-79afedd32769] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 09:27:46.113175  349088 system_pods.go:89] "registry-creds-764b6fb674-nz5gr" [824af6be-aaa9-462e-afe0-7c82d519ffe4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 09:27:46.113184  349088 system_pods.go:89] "registry-proxy-p6swb" [bb847846-a739-4165-9043-1a8601f04bd7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 09:27:46.113189  349088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d6tfl" [e71257a8-811d-449a-8d78-9fb66dbb5379] Pending
	I1101 09:27:46.113200  349088 system_pods.go:89] "snapshot-controller-7d9fbc56b8-g6nhw" [9955c1b1-f135-48db-be87-91fdaaa7c2f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 09:27:46.113206  349088 system_pods.go:89] "storage-provisioner" [bbbac5d1-8445-4301-918f-9e1633b097d2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:27:46.113218  349088 system_pods.go:126] duration metric: took 42.28983ms to wait for k8s-apps to be running ...
	I1101 09:27:46.113233  349088 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:27:46.113295  349088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
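[editor's note] The WaitForService step above runs systemctl is-active --quiet kubelet on the node and treats a zero exit code as "running". A local (non-SSH) Go sketch of the same check via os/exec; minikube itself issues this through its ssh_runner.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses output; the exit code alone says whether the
		// unit is active (0) or not (non-zero).
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is active")
	}
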
	I1101 09:27:46.185838  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:27:46.212282  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 09:27:46.374505  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:46.376420  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:46.884887  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:46.887176  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:47.457309  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:47.468305  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:47.580480  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.727982175s)
	I1101 09:27:47.580531  349088 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-610936"
	I1101 09:27:47.580591  349088 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.719752323s)
	I1101 09:27:47.580632  349088 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.4673145s)
	I1101 09:27:47.580721  349088 system_svc.go:56] duration metric: took 1.467472013s WaitForService to wait for kubelet
	I1101 09:27:47.580743  349088 kubeadm.go:587] duration metric: took 12.248577708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:27:47.580770  349088 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:27:47.582183  349088 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 09:27:47.582186  349088 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 09:27:47.584133  349088 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 09:27:47.584735  349088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 09:27:47.585467  349088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 09:27:47.585491  349088 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 09:27:47.663562  349088 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 09:27:47.663591  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
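[editor's note] The kapi.go lines repeated throughout this log poll the cluster for pods matching a label selector (here kubernetes.io/minikube-addons=csi-hostpath-driver) until every match reports Running. A condensed client-go sketch of that kind of poll, assuming a kubeconfig at the default path; the selector and namespace are taken from the log, and the helper name is hypothetical rather than minikube's own code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// allPodsRunning reports whether every pod matching the selector in the
	// namespace is in phase Running. Hypothetical helper.
	func allPodsRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing scheduled yet
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx := context.Background()
		for {
			ok, err := allPodsRunning(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver")
			if err != nil {
				panic(err)
			}
			if ok {
				fmt.Println("csi-hostpath-driver pods are running")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
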
	I1101 09:27:47.666508  349088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 09:27:47.666540  349088 node_conditions.go:123] node cpu capacity is 2
	I1101 09:27:47.666560  349088 node_conditions.go:105] duration metric: took 85.78252ms to run NodePressure ...
	I1101 09:27:47.666576  349088 start.go:242] waiting for startup goroutines ...
	I1101 09:27:47.881842  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:47.882842  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:47.886528  349088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 09:27:47.886556  349088 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 09:27:48.090205  349088 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:27:48.090230  349088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 09:27:48.091584  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:48.238218  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 09:27:48.372289  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:48.374702  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:48.590458  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:48.872682  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:48.875219  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:49.093604  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:49.374826  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:49.375040  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:49.594000  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:49.766509  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.58059868s)
	W1101 09:27:49.766571  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:27:49.766602  349088 retry.go:31] will retry after 498.527289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
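[editor's note] Every retry of this apply fails identically: kubectl's client-side validation reports documents in ig-crd.yaml with no apiVersion or kind set, so retrying cannot succeed until the file itself is corrected. A Go sketch of the same pre-check, splitting the multi-document YAML and flagging documents missing either field; the file path is taken from the log, and the check is a hypothetical illustration built on gopkg.in/yaml.v3, not part of minikube.

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Path taken from the failing apply in the log above.
		f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for i := 1; ; i++ {
			var doc map[string]interface{}
			err := dec.Decode(&doc)
			if errors.Is(err, io.EOF) {
				break
			}
			if err != nil {
				panic(err)
			}
			if doc == nil {
				continue // empty document between separators
			}
			// kubectl's validation error above means these keys are missing.
			if doc["apiVersion"] == nil || doc["kind"] == nil {
				fmt.Printf("document %d: apiVersion or kind not set\n", i)
			}
		}
	}
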
	I1101 09:27:49.766610  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.554281701s)
	I1101 09:27:49.887598  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:49.888126  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:50.098476  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:50.243456  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.005192092s)
	I1101 09:27:50.244608  349088 addons.go:480] Verifying addon gcp-auth=true in "addons-610936"
	I1101 09:27:50.246395  349088 out.go:179] * Verifying gcp-auth addon...
	I1101 09:27:50.248414  349088 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 09:27:50.266019  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:27:50.289275  349088 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 09:27:50.289302  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:50.387406  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:50.388029  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:50.601719  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:50.752645  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:50.868974  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:50.874811  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:51.090819  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:51.255421  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:51.372275  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:51.373788  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:51.589814  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:51.755758  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:51.875829  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:51.876484  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:52.091604  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:52.125154  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.859078037s)
	W1101 09:27:52.125214  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:27:52.125243  349088 retry.go:31] will retry after 387.959811ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:27:52.254807  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:52.370968  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:52.373483  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:52.513666  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:27:52.593530  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:52.754905  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:52.871758  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:52.873590  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:53.090947  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:53.253461  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:53.375740  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:53.377742  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:53.594336  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:53.753713  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:53.870968  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.357252791s)
	W1101 09:27:53.871039  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:27:53.871068  349088 retry.go:31] will retry after 850.837671ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:27:53.887158  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:53.888402  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:54.092053  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:54.255688  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:54.371469  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:54.372537  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:54.591180  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:54.722435  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:27:54.755187  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:54.868852  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:54.877857  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:55.091104  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:55.254917  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:55.370838  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:55.372955  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:55.593420  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:55.756621  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:55.843886  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.121372319s)
	W1101 09:27:55.843975  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:27:55.844006  349088 retry.go:31] will retry after 934.689197ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:27:55.867479  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:55.869106  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:56.090755  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:56.251968  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:56.367783  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:56.369239  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:56.589914  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:56.754432  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:56.779656  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:27:56.869071  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:56.871332  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:57.091469  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:57.255688  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:57.367149  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:57.372458  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:57.589425  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:57.754261  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 09:27:57.763628  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:27:57.763662  349088 retry.go:31] will retry after 1.073735115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:27:57.866539  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:57.869215  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:58.091779  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:58.253148  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:58.368730  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:58.370466  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:58.589950  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:58.754656  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:58.837614  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:27:58.870757  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:58.872826  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:59.096397  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:59.255694  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:59.369690  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:27:59.375404  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:59.590707  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:27:59.755775  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:27:59.873541  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:27:59.877816  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:00.094827  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:00.150676  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.313002265s)
	W1101 09:28:00.150741  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:00.150775  349088 retry.go:31] will retry after 2.397028196s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:00.255148  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:00.368283  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:00.375062  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:00.588892  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:00.753094  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:00.872702  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:00.872919  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:01.089828  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:01.251883  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:01.367197  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:01.368944  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:01.590146  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:01.755612  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:01.870183  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:01.872692  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:02.090361  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:02.254623  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:02.367577  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:02.368778  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:02.548981  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:28:02.589611  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:02.752581  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:02.867818  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:02.870083  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:03.089513  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:03.253757  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:03.371426  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:03.375114  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:03.592484  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:03.753746  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:03.766750  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.217712585s)
	W1101 09:28:03.766808  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:03.766839  349088 retry.go:31] will retry after 4.826998891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:03.876688  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:03.878536  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:04.092507  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:04.255449  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:04.371832  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:04.376973  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:04.590170  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:04.753896  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:04.872968  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:04.874737  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:05.099436  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:05.253777  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:05.374511  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:05.375604  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:05.591029  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:05.756054  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:05.870902  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:05.872382  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:06.582219  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:06.582755  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:06.584323  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:06.584428  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:06.589648  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:06.755643  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:06.872236  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:06.872294  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:07.092463  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:07.255721  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:07.373419  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:07.374078  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:07.590279  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:07.752724  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:07.868230  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:07.870830  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:08.093256  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:08.253779  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:08.365989  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:08.367968  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:08.589422  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:08.594584  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:28:08.751538  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:08.972714  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:08.975758  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:09.090562  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:09.253467  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:09.369352  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:09.369386  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:09.592492  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:09.754674  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:09.795361  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.2007288s)
	W1101 09:28:09.795502  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:09.795533  349088 retry.go:31] will retry after 3.483295677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:09.889588  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:09.889754  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:10.091064  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:10.252759  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:10.369205  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:10.370615  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:10.591931  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:10.755265  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:10.869266  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:10.873140  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:11.090724  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:11.258190  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:11.842575  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:11.842638  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:11.843463  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:11.844289  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:11.866936  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:11.868477  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:12.094180  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:12.257779  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:12.369335  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:12.370262  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:12.590699  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:12.753997  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:12.873239  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:12.873838  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:13.096391  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:13.262598  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:13.279807  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:28:13.375199  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:13.375305  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:13.594719  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:13.756018  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:13.870499  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:13.874629  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:14.092422  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:14.256836  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:14.368688  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:14.372538  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:14.591170  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:14.626197  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.346342043s)
	W1101 09:28:14.626257  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:14.626285  349088 retry.go:31] will retry after 11.238635582s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:14.756318  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:14.871756  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:14.872388  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:15.091417  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:15.253067  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:15.369356  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:15.369487  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:15.589679  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:15.751851  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:15.870300  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:15.872929  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:16.093454  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:16.256020  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:16.368811  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:16.377654  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:16.590726  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:16.751971  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:16.866737  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:16.868073  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:17.091111  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:17.253191  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:17.380787  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:17.382875  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:17.694211  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:17.753443  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:17.867891  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:17.869155  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:18.097913  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:18.256274  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:18.367711  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 09:28:18.367995  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:18.590121  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:18.755701  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:18.868358  349088 kapi.go:107] duration metric: took 33.004649349s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 09:28:18.869253  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:19.092998  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:19.254996  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:19.370208  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:19.590250  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:19.755976  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:19.869582  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:20.090462  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:20.254219  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:20.367836  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:20.590144  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:20.764454  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:20.872908  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:21.099159  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:21.257438  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:21.369740  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:21.592467  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:21.755458  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:21.878038  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:22.102035  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:22.261300  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:22.375543  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:22.599436  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:22.761904  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:22.868483  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:23.091712  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:23.252322  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:23.367921  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:23.595306  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:23.753042  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:23.866820  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:24.089256  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:24.253336  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:24.368497  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:24.589260  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:24.753526  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:24.867561  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:25.094722  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:25.253215  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:25.372605  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:25.591240  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:25.755219  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:25.865684  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:28:25.866978  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:26.092927  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:26.254479  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:26.415023  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:26.594693  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:26.757331  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:26.869279  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:26.939561  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.073824208s)
	W1101 09:28:26.939622  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:26.939646  349088 retry.go:31] will retry after 12.516279473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:27.090209  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:27.252848  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:27.367599  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:27.591575  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:27.753085  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:27.886082  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:28.091552  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:28.256058  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:28.371186  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:28.918878  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:28.919061  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:28.920489  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:29.089665  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:29.252331  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:29.371392  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:29.589081  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:29.751776  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:29.868841  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:30.094135  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:30.253244  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:30.371621  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:30.593244  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:30.755576  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:30.868192  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:31.094229  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:31.253208  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:31.370372  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:31.589601  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:31.752212  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:31.868533  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:32.090517  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:32.251653  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:32.368430  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:32.596828  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:32.759991  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:32.866777  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:33.089980  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:33.257570  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:33.368813  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:33.595521  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:33.753194  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:33.874801  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:34.092441  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:34.254856  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:34.370155  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:34.600736  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:34.760531  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:34.868188  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:35.220929  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:35.258446  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:35.368460  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:35.592608  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:35.755534  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:35.875635  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:36.095721  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:36.256570  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:36.368182  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:36.590310  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:36.754476  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:36.871777  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:37.090067  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:37.253914  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:37.369640  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:37.590648  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:37.752447  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:37.870442  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:38.089769  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:38.253210  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:38.368371  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:38.599316  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:38.754287  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:38.872381  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:39.100825  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:39.254338  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:39.374120  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:39.456188  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:28:39.593214  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:39.754141  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:39.878222  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:40.089519  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:40.256163  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:40.369367  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:40.590695  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:40.756104  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:40.797907  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.34166729s)
	W1101 09:28:40.797954  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:40.797981  349088 retry.go:31] will retry after 15.246599985s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:40.876729  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:41.093848  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:41.253770  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:41.369521  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:41.590187  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:41.754175  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:41.870656  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:42.094068  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:42.252584  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:42.368419  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:42.599797  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:42.761338  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:42.882315  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:43.093394  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:43.254577  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:43.367716  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:43.594791  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:43.758565  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:43.870242  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:44.091571  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:44.253654  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:44.371270  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:44.594491  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:44.766584  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:44.871242  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:45.092157  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:45.256141  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:45.373960  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:45.592980  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:45.753566  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:45.867354  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:46.092779  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:46.252268  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:46.372495  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:46.603612  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:47.016509  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:47.018581  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:47.095070  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:47.254032  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:47.366854  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:47.590717  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:47.754654  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:47.866530  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:48.091740  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:48.253582  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:48.368335  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:48.589009  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:48.757155  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:48.867766  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:49.091014  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:49.257198  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:49.370681  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:49.591212  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:49.753115  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:49.870433  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:50.093323  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:50.256487  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:50.369852  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:50.590447  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:50.753200  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:50.871272  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:51.098706  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:51.253373  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:51.370210  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:51.590520  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 09:28:51.769214  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:51.872303  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:52.095093  349088 kapi.go:107] duration metric: took 1m4.510352876s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 09:28:52.256098  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:52.367219  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:52.756656  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:52.867404  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:53.274233  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:53.372673  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:53.755711  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:53.868164  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:54.256268  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:54.366966  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:54.753542  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:54.868746  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:55.254072  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:55.371741  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:55.753280  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:55.873961  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:56.044993  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 09:28:56.264356  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:56.368776  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:56.756276  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:56.872235  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:57.253668  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:57.291524  349088 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.246490325s)
	W1101 09:28:57.291567  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:57.291590  349088 retry.go:31] will retry after 30.489031451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 09:28:57.379630  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:57.752917  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:57.869947  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:58.254785  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:58.366676  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:58.753752  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:58.867598  349088 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 09:28:59.262720  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:28:59.370933  349088 kapi.go:107] duration metric: took 1m13.508272969s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 09:28:59.753819  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:00.255336  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:00.756647  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:01.258605  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:01.752899  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:02.253665  349088 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 09:29:02.753651  349088 kapi.go:107] duration metric: took 1m12.505231126s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 09:29:02.755517  349088 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-610936 cluster.
	I1101 09:29:02.757225  349088 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 09:29:02.758516  349088 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
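The gcp-auth note above mentions opting a pod out of credential mounting with the `gcp-auth-skip-secret` label. A minimal sketch of doing that from the command line follows; the pod name and image are placeholders, and the label value "true" is an assumption, since the log only names the label key:

	# Hypothetical example: run a pod carrying the gcp-auth-skip-secret label
	# so the gcp-auth addon skips mounting GCP credentials into it.
	kubectl --context addons-610936 run skip-gcp-auth-demo \
	  --image=nginx \
	  --labels=gcp-auth-skip-secret=true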
	I1101 09:29:27.782618  349088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 09:29:28.510880  349088 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 09:29:28.511027  349088 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
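Every failed apply in the retry loop above reports the same validation error: /etc/kubernetes/addons/ig-crd.yaml is rejected because apiVersion and kind are not set. A minimal way to check this from the host, assuming a local minikube binary with SSH access to the node, is to look at the first lines of that manifest; a well-formed CRD manifest begins by declaring apiVersion apiextensions.k8s.io/v1 and kind CustomResourceDefinition:

	# Inspect the manifest kubectl keeps rejecting; its first lines should
	# declare apiVersion and kind if the file is well-formed.
	minikube -p addons-610936 ssh -- sudo head -n 10 /etc/kubernetes/addons/ig-crd.yaml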
	I1101 09:29:28.512682  349088 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, default-storageclass, storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1101 09:29:28.513693  349088 addons.go:515] duration metric: took 1m53.181636608s for enable addons: enabled=[registry-creds amd-gpu-device-plugin default-storageclass storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1101 09:29:28.513752  349088 start.go:247] waiting for cluster config update ...
	I1101 09:29:28.513779  349088 start.go:256] writing updated cluster config ...
	I1101 09:29:28.514139  349088 ssh_runner.go:195] Run: rm -f paused
	I1101 09:29:28.521080  349088 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:29:28.524935  349088 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gbqkt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:28.531625  349088 pod_ready.go:94] pod "coredns-66bc5c9577-gbqkt" is "Ready"
	I1101 09:29:28.531658  349088 pod_ready.go:86] duration metric: took 6.69521ms for pod "coredns-66bc5c9577-gbqkt" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:28.534259  349088 pod_ready.go:83] waiting for pod "etcd-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:28.539687  349088 pod_ready.go:94] pod "etcd-addons-610936" is "Ready"
	I1101 09:29:28.539712  349088 pod_ready.go:86] duration metric: took 5.42952ms for pod "etcd-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:28.542400  349088 pod_ready.go:83] waiting for pod "kube-apiserver-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:28.547752  349088 pod_ready.go:94] pod "kube-apiserver-addons-610936" is "Ready"
	I1101 09:29:28.547787  349088 pod_ready.go:86] duration metric: took 5.363902ms for pod "kube-apiserver-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:28.550457  349088 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:28.928116  349088 pod_ready.go:94] pod "kube-controller-manager-addons-610936" is "Ready"
	I1101 09:29:28.928150  349088 pod_ready.go:86] duration metric: took 377.66462ms for pod "kube-controller-manager-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:29.126544  349088 pod_ready.go:83] waiting for pod "kube-proxy-wm94c" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:29.525560  349088 pod_ready.go:94] pod "kube-proxy-wm94c" is "Ready"
	I1101 09:29:29.525595  349088 pod_ready.go:86] duration metric: took 399.020613ms for pod "kube-proxy-wm94c" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:29.726321  349088 pod_ready.go:83] waiting for pod "kube-scheduler-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:30.124793  349088 pod_ready.go:94] pod "kube-scheduler-addons-610936" is "Ready"
	I1101 09:29:30.124825  349088 pod_ready.go:86] duration metric: took 398.475664ms for pod "kube-scheduler-addons-610936" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:29:30.124837  349088 pod_ready.go:40] duration metric: took 1.603719867s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
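The extra readiness wait above keys off the label selectors listed in the log. As a sketch, the same kube-system pods can be inspected manually with those selectors:

	# Query the control-plane pods by the labels the readiness wait uses.
	kubectl --context addons-610936 get pods -n kube-system -l k8s-app=kube-dns
	kubectl --context addons-610936 get pods -n kube-system -l component=etcd
	kubectl --context addons-610936 get pods -n kube-system -l component=kube-apiserver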
	I1101 09:29:30.173484  349088 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:29:30.175425  349088 out.go:179] * Done! kubectl is now configured to use "addons-610936" cluster and "default" namespace by default
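With kubectl reported as configured for the addons-610936 cluster and the default namespace, the active context and overall cluster state can be confirmed directly (a minimal sketch):

	# Verify the context minikube configured and list all running pods.
	kubectl config current-context
	kubectl get pods -A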
	
	
	==> CRI-O <==
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.341631459Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:fafb3f50759fb3ca608566a3f99c714cd2c84822225a83a2784a9703746c5e3f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1761989244899461357,StartedAt:1761989245042825453,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1fd67ea0ce7135fb26c7c0d9556b6b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/ab1fd67ea0ce7135fb26c7c0d9556b6b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/ab1fd67ea0ce7135fb26c7c0d9556b6b/containers/kube-scheduler/1aac6d6a,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-addons-610936_ab1fd67ea
0ce7135fb26c7c0d9556b6b/kube-scheduler/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=4190db1c-9103-4a90-9937-d3e6ca317889 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.342498683Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:8034149d00598833129fa576f1e2fc17f25643b0868c221ee401136b08eb574f,Verbose:false,}" file="otel-collector/interceptors.go:62" id=cf6957a3-49c2-4a47-8189-4c206d7e8a72 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.342598777Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:8034149d00598833129fa576f1e2fc17f25643b0868c221ee401136b08eb574f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1761989244844233021,StartedAt:1761989244978188634,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6f27c5410154d627073969293976eea,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\"
:\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/b6f27c5410154d627073969293976eea/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/b6f27c5410154d627073969293976eea/containers/kube-apiserver/466587fd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRel
abel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-addons-610936_b6f27c5410154d627073969293976eea/kube-apiserver/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=cf6957a3-49c2-4a47-8189-4c206d7e8a72 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.343320485Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:2273a9881f45e98bd51b08079c70bba61edb15367d96d4fc307a139e6efdecc0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=8c1edf82-0c2c-49a0-880e-741422a0414d name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.343607204Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:2273a9881f45e98bd51b08079c70bba61edb15367d96d4fc307a139e6efdecc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1761989244801927742,StartedAt:1761989244909071832,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.34.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ba9b2dfdbddd8abd459262fdb458f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":1025
7,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d2ba9b2dfdbddd8abd459262fdb458f0/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d2ba9b2dfdbddd8abd459262fdb458f0/containers/kube-controller-manager/bcf83e52,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,
HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-addons-610936_d2ba9b2dfdbddd8abd459262fdb458f0/kube-controller-manager/0.log,Resources:&ContainerResources{Linux:&LinuxContain
erResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=8c1edf82-0c2c-49a0-880e-741422a0414d name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.344702441Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:ab4b8cec913e5fbce9ab209d6099961cc18d592862676af287d1934a1852153c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=b9adc70d-5b94-4eaa-81e9-50f167f50344 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.344821140Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:ab4b8cec913e5fbce9ab209d6099961cc18d592862676af287d1934a1852153c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1761989244795687726,StartedAt:1761989244984625690,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.6.4-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8fcdd29f8edd4ee30ea406f31d39174,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c8fcdd29f8edd4ee30ea406f31d39174/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c8fcdd29f8edd4ee30ea406f31d39174/containers/etcd/9544f3a2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPA
GATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-addons-610936_c8fcdd29f8edd4ee30ea406f31d39174/etcd/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b9adc70d-5b94-4eaa-81e9-50f167f50344 name=/runtime.v1.RuntimeService/ContainerStatus
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.364102955Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=522c58fe-6129-408c-9036-9eade60f4930 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.364480129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=522c58fe-6129-408c-9036-9eade60f4930 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.366811036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b891a3d-20c0-4a46-8ad7-070e7488708f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.368427323Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989551368394216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588624,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b891a3d-20c0-4a46-8ad7-070e7488708f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.369158439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92f0bdd1-0b08-47cc-88a1-f6bd43871662 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.369218489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92f0bdd1-0b08-47cc-88a1-f6bd43871662 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.369522338Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b539c862293409e19596e3159d20acac7e0848a2026baa59b6de3a47e64c6b,PodSandboxId:b1b1f64c123d730eab4a2c71844ebfa30cd6160c6c81172e7fc76f3c3b8bf320,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761989409682555130,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2369d8f-b848-4d1a-9e8f-e2845ef60291,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd19ad29558e96fd24c9acaa5cd8adb9b3aee6290ecf781a39f59c0546f61318,PodSandboxId:d07d1e33d07942a63b098ca9592423628b138ef0e60f74304224f6d6deda6887,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761989372567427309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85f633d6-3539-4443-8d47-46b81caf92be,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e517093db9d4a4b15378a2d18d839a5570e4d1e23236ad4a1cad03529a0236,PodSandboxId:d9d3d351f5293fa83d94acf37673dbef548b72024f30eac7a887ced0d73f9fc1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761989338922346343,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-kdk56,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d67bbf1f-1e3d-46ae-a872-e71b55056019,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:96ccf439e73c9a5761b8bc1cf8d005b3c28c9ab5a2d04dd717a6827c098973da,PodSandboxId:675df002ba5dc82e9984a12e0b1c3647715f7dccd3b5596cb47ca9521fb35ff3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761989322460937835,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k7flz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ceec19dd-43fa-46ed-9829-b10278e5cf2c,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51396af81c8b50de5cf9ae9400bbce46d91ea910302ef89e6b2de273b3b70e4d,PodSandboxId:c8df62cf4ee2cf0a91f01a6e9ca4b5deb939aa08dc05505b672a50d867ae4a8b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761989316886731743,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-v2tvv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f0b455ad-6d04-4986-ab33-3edfd0fb7ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302c7cc67db1757a7b34fe684098b4d3b00a0122c6396a50c0d5451ede4a5f09,PodSandboxId:23ca43c714536d040e2c7270181497a32d858a68082d6a9c9f3c375bdaec718b,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761989311374810116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-8zz4q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b2c07026-df4c-4c8f-a77d-b41864429b49,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be11992f41573c1585e4fa6469d8beb98321d90e3b99f00c3974200e17670788,PodSandboxId:d3e41f66ab23a50f0ce3ee5b382f3c94e31988f7f44a45fb36bb4bd43fb952ee,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761989291977140257,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b8eca17-1e14-4918-b8d0-991e96bd3770,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cf8a7bf82657d6a2878db78f5553c967593ec366d9c0aa44e3e1f5c71847f6e,PodSandboxId:93cec0513adbfdddcc64cde2839d7fcd15a73d49b0473cc
30474649cd7978e8f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761989268682328641,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5pdrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8e4e785-d8f6-4d48-8364-9ae272d16ed4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af9f4f51dd4ff474fd24e5769516432460f5a719cc8ada9f3335798427616bd,PodSandboxId:a58fe82
79d7b50de30d47e7fc96602966eb296f1b93c09afd38283fc73ab0b45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761989265539546288,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbbac5d1-8445-4301-918f-9e1633b097d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcbd5e889ac7c859909ef3521fa201ecdefad68531bc297090e5628fd14802f4,PodSandboxId:89373f63c3094b12749
ee8e00e27d7a139a9b22658990633634e875782019999,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989257579234210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gbqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e62dfed-a46f-4e51-a84d-07825fc7bc70,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b35b53f7950068a92486e0920f2ff6340f6e3caa74dd0d95fbb470ac779d65b6,PodSandboxId:ab59bd977914c3fb69cbd78e09d487830dd66cb5d08696ee521eafc2bcd2d562,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989256709476488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wm94c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d02112-bf3c-4352-a3de-02ca7e44f294,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fafb3f50759fb3ca608566a3f99c714cd2c84822225a83a2784a9703746c5e3f,PodSandboxId:07a69256ce35305d0596b690f1e23533fa1d1a20253f88633386f21b1a477eb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989244771640063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1fd67ea0ce7135fb26c7c0d9556b6b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8034149d00598833129fa576f1e2fc17f25643b0868c221ee401136b08eb574f,PodSandboxId:d772845cf8c044bb6fbbe42fec2170073a28b9e7ea660adbe1c218ca02be40e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989244722454011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6f27c5410154d627073969293976
eea,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2273a9881f45e98bd51b08079c70bba61edb15367d96d4fc307a139e6efdecc0,PodSandboxId:b5db80dbb4af6a683fee765bab3c67062f47fc1279d8004d661c494c09847944,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989244705683901,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ba9b2dfdbddd8abd459262fdb458f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab4b8cec913e5fbce9ab209d6099961cc18d592862676af287d1934a1852153c,PodSandboxId:a4fe65821e195dbbe96f5d3581049567288b70850a9a3d863af15700b1875598,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:176198924
4710560354,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8fcdd29f8edd4ee30ea406f31d39174,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92f0bdd1-0b08-47cc-88a1-f6bd43871662 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.399197425Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.401486544Z" level=debug msg="Using SQLite blob info cache at /var/lib/containers/cache/blob-info-cache-v1.sqlite" file="blobinfocache/default.go:74"
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.401660288Z" level=debug msg="Source is a manifest list; copying (only) instance sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86 for current system" file="copy/copy.go:318"
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.401771388Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.417165200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=566aa186-9466-4ff5-a50e-38908c664969 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.417375924Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=566aa186-9466-4ff5-a50e-38908c664969 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.418998951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=85d88747-e8d7-4416-8d61-2f4ff8d5beb8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.420304332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989551420274411,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:588624,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=85d88747-e8d7-4416-8d61-2f4ff8d5beb8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.421313484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51c293bd-8e7b-45c4-9bc8-18241784fd5e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.421404318Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51c293bd-8e7b-45c4-9bc8-18241784fd5e name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:32:31 addons-610936 crio[819]: time="2025-11-01 09:32:31.421780623Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:84b539c862293409e19596e3159d20acac7e0848a2026baa59b6de3a47e64c6b,PodSandboxId:b1b1f64c123d730eab4a2c71844ebfa30cd6160c6c81172e7fc76f3c3b8bf320,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761989409682555130,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d2369d8f-b848-4d1a-9e8f-e2845ef60291,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd19ad29558e96fd24c9acaa5cd8adb9b3aee6290ecf781a39f59c0546f61318,PodSandboxId:d07d1e33d07942a63b098ca9592423628b138ef0e60f74304224f6d6deda6887,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761989372567427309,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 85f633d6-3539-4443-8d47-46b81caf92be,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e517093db9d4a4b15378a2d18d839a5570e4d1e23236ad4a1cad03529a0236,PodSandboxId:d9d3d351f5293fa83d94acf37673dbef548b72024f30eac7a887ced0d73f9fc1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761989338922346343,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-kdk56,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d67bbf1f-1e3d-46ae-a872-e71b55056019,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:96ccf439e73c9a5761b8bc1cf8d005b3c28c9ab5a2d04dd717a6827c098973da,PodSandboxId:675df002ba5dc82e9984a12e0b1c3647715f7dccd3b5596cb47ca9521fb35ff3,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761989322460937835,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k7flz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ceec19dd-43fa-46ed-9829-b10278e5cf2c,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51396af81c8b50de5cf9ae9400bbce46d91ea910302ef89e6b2de273b3b70e4d,PodSandboxId:c8df62cf4ee2cf0a91f01a6e9ca4b5deb939aa08dc05505b672a50d867ae4a8b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761989316886731743,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-v2tvv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f0b455ad-6d04-4986-ab33-3edfd0fb7ec9,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:302c7cc67db1757a7b34fe684098b4d3b00a0122c6396a50c0d5451ede4a5f09,PodSandboxId:23ca43c714536d040e2c7270181497a32d858a68082d6a9c9f3c375bdaec718b,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761989311374810116,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-8zz4q,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: b2c07026-df4c-4c8f-a77d-b41864429b49,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be11992f41573c1585e4fa6469d8beb98321d90e3b99f00c3974200e17670788,PodSandboxId:d3e41f66ab23a50f0ce3ee5b382f3c94e31988f7f44a45fb36bb4bd43fb952ee,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761989291977140257,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b8eca17-1e14-4918-b8d0-991e96bd3770,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9cf8a7bf82657d6a2878db78f5553c967593ec366d9c0aa44e3e1f5c71847f6e,PodSandboxId:93cec0513adbfdddcc64cde2839d7fcd15a73d49b0473cc
30474649cd7978e8f,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761989268682328641,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-5pdrl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8e4e785-d8f6-4d48-8364-9ae272d16ed4,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5af9f4f51dd4ff474fd24e5769516432460f5a719cc8ada9f3335798427616bd,PodSandboxId:a58fe82
79d7b50de30d47e7fc96602966eb296f1b93c09afd38283fc73ab0b45,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761989265539546288,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbbac5d1-8445-4301-918f-9e1633b097d2,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcbd5e889ac7c859909ef3521fa201ecdefad68531bc297090e5628fd14802f4,PodSandboxId:89373f63c3094b12749
ee8e00e27d7a139a9b22658990633634e875782019999,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989257579234210,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-gbqkt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e62dfed-a46f-4e51-a84d-07825fc7bc70,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b35b53f7950068a92486e0920f2ff6340f6e3caa74dd0d95fbb470ac779d65b6,PodSandboxId:ab59bd977914c3fb69cbd78e09d487830dd66cb5d08696ee521eafc2bcd2d562,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989256709476488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wm94c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d02112-bf3c-4352-a3de-02ca7e44f294,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fafb3f50759fb3ca608566a3f99c714cd2c84822225a83a2784a9703746c5e3f,PodSandboxId:07a69256ce35305d0596b690f1e23533fa1d1a20253f88633386f21b1a477eb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989244771640063,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab1fd67ea0ce7135fb26c7c0d9556b6b,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports:
[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8034149d00598833129fa576f1e2fc17f25643b0868c221ee401136b08eb574f,PodSandboxId:d772845cf8c044bb6fbbe42fec2170073a28b9e7ea660adbe1c218ca02be40e8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989244722454011,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6f27c5410154d627073969293976
eea,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2273a9881f45e98bd51b08079c70bba61edb15367d96d4fc307a139e6efdecc0,PodSandboxId:b5db80dbb4af6a683fee765bab3c67062f47fc1279d8004d661c494c09847944,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989244705683901,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name:
kube-controller-manager-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2ba9b2dfdbddd8abd459262fdb458f0,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab4b8cec913e5fbce9ab209d6099961cc18d592862676af287d1934a1852153c,PodSandboxId:a4fe65821e195dbbe96f5d3581049567288b70850a9a3d863af15700b1875598,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:176198924
4710560354,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-610936,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8fcdd29f8edd4ee30ea406f31d39174,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51c293bd-8e7b-45c4-9bc8-18241784fd5e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	84b539c862293       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   b1b1f64c123d7       nginx
	cd19ad29558e9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago       Running             busybox                   0                   d07d1e33d0794       busybox
	24e517093db9d       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   d9d3d351f5293       ingress-nginx-controller-675c5ddd98-kdk56
	96ccf439e73c9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago       Exited              patch                     0                   675df002ba5dc       ingress-nginx-admission-patch-k7flz
	51396af81c8b5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   3 minutes ago       Exited              create                    0                   c8df62cf4ee2c       ingress-nginx-admission-create-v2tvv
	302c7cc67db17       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   23ca43c714536       gadget-8zz4q
	be11992f41573       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   d3e41f66ab23a       kube-ingress-dns-minikube
	9cf8a7bf82657       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   93cec0513adbf       amd-gpu-device-plugin-5pdrl
	5af9f4f51dd4f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   a58fe8279d7b5       storage-provisioner
	fcbd5e889ac7c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             4 minutes ago       Running             coredns                   0                   89373f63c3094       coredns-66bc5c9577-gbqkt
	b35b53f795006       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             4 minutes ago       Running             kube-proxy                0                   ab59bd977914c       kube-proxy-wm94c
	fafb3f50759fb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   07a69256ce353       kube-scheduler-addons-610936
	8034149d00598       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   d772845cf8c04       kube-apiserver-addons-610936
	ab4b8cec913e5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   a4fe65821e195       etcd-addons-610936
	2273a9881f45e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   b5db80dbb4af6       kube-controller-manager-addons-610936
	
	
	==> coredns [fcbd5e889ac7c859909ef3521fa201ecdefad68531bc297090e5628fd14802f4] <==
	[INFO] 10.244.0.8:58264 - 65116 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000156216s
	[INFO] 10.244.0.8:58264 - 16439 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.00085078s
	[INFO] 10.244.0.8:58264 - 27843 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000164212s
	[INFO] 10.244.0.8:58264 - 55194 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000244965s
	[INFO] 10.244.0.8:58264 - 42297 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000131847s
	[INFO] 10.244.0.8:58264 - 4296 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000249077s
	[INFO] 10.244.0.8:58264 - 25593 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000781471s
	[INFO] 10.244.0.8:52540 - 37455 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000119566s
	[INFO] 10.244.0.8:52540 - 37746 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000205797s
	[INFO] 10.244.0.8:45833 - 44000 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000082299s
	[INFO] 10.244.0.8:45833 - 44270 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108943s
	[INFO] 10.244.0.8:38702 - 44032 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000157692s
	[INFO] 10.244.0.8:38702 - 44302 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000130207s
	[INFO] 10.244.0.8:37346 - 62163 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082154s
	[INFO] 10.244.0.8:37346 - 62381 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011665s
	[INFO] 10.244.0.23:51626 - 18337 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000501441s
	[INFO] 10.244.0.23:46816 - 57875 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000234406s
	[INFO] 10.244.0.23:43533 - 34290 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000324376s
	[INFO] 10.244.0.23:48310 - 43734 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145612s
	[INFO] 10.244.0.23:56656 - 35606 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000210129s
	[INFO] 10.244.0.23:50951 - 5406 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000117585s
	[INFO] 10.244.0.23:41749 - 29672 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.004549596s
	[INFO] 10.244.0.23:60842 - 5488 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004401615s
	[INFO] 10.244.0.26:58516 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.002262575s
	[INFO] 10.244.0.26:49475 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000268571s
	
	
	==> describe nodes <==
	Name:               addons-610936
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-610936
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=addons-610936
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_27_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-610936
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:27:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-610936
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:32:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:30:35 +0000   Sat, 01 Nov 2025 09:27:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:30:35 +0000   Sat, 01 Nov 2025 09:27:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:30:35 +0000   Sat, 01 Nov 2025 09:27:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:30:35 +0000   Sat, 01 Nov 2025 09:27:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.81
	  Hostname:    addons-610936
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 067cbdb7aeda471aaaf4ef736820bc12
	  System UUID:                067cbdb7-aeda-471a-aaf4-ef736820bc12
	  Boot ID:                    fec582e8-2949-4206-b135-a486049758e3
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     hello-world-app-5d498dc89-d6d67              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-8zz4q                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-kdk56    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m46s
	  kube-system                 amd-gpu-device-plugin-5pdrl                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 coredns-66bc5c9577-gbqkt                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m55s
	  kube-system                 etcd-addons-610936                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m
	  kube-system                 kube-apiserver-addons-610936                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-controller-manager-addons-610936        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-proxy-wm94c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-addons-610936                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m8s (x8 over 5m8s)  kubelet          Node addons-610936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m8s (x8 over 5m8s)  kubelet          Node addons-610936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m8s (x7 over 5m8s)  kubelet          Node addons-610936 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m1s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m                   kubelet          Node addons-610936 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m                   kubelet          Node addons-610936 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m                   kubelet          Node addons-610936 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m                   kubelet          Node addons-610936 status is now: NodeReady
	  Normal  RegisteredNode           4m57s                node-controller  Node addons-610936 event: Registered Node addons-610936 in Controller
	
	
	==> dmesg <==
	[  +5.151757] kauditd_printk_skb: 56 callbacks suppressed
	[Nov 1 09:28] kauditd_printk_skb: 5 callbacks suppressed
	[  +7.256636] kauditd_printk_skb: 11 callbacks suppressed
	[  +3.234705] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.240489] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.087707] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.212666] kauditd_printk_skb: 101 callbacks suppressed
	[  +3.528035] kauditd_printk_skb: 76 callbacks suppressed
	[  +3.631994] kauditd_printk_skb: 155 callbacks suppressed
	[  +0.000033] kauditd_printk_skb: 59 callbacks suppressed
	[Nov 1 09:29] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.000581] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.029563] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.978499] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.706433] kauditd_printk_skb: 38 callbacks suppressed
	[Nov 1 09:30] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.133414] kauditd_printk_skb: 216 callbacks suppressed
	[  +3.778704] kauditd_printk_skb: 85 callbacks suppressed
	[  +1.005873] kauditd_printk_skb: 79 callbacks suppressed
	[  +2.925302] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.187217] kauditd_printk_skb: 37 callbacks suppressed
	[  +9.452117] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.000030] kauditd_printk_skb: 10 callbacks suppressed
	[Nov 1 09:31] kauditd_printk_skb: 41 callbacks suppressed
	[Nov 1 09:32] kauditd_printk_skb: 127 callbacks suppressed
	
	
	==> etcd [ab4b8cec913e5fbce9ab209d6099961cc18d592862676af287d1934a1852153c] <==
	{"level":"info","ts":"2025-11-01T09:28:35.208173Z","caller":"traceutil/trace.go:172","msg":"trace[824776101] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1039; }","duration":"131.770903ms","start":"2025-11-01T09:28:35.076395Z","end":"2025-11-01T09:28:35.208166Z","steps":["trace[824776101] 'agreement among raft nodes before linearized reading'  (duration: 131.613863ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:28:35.212732Z","caller":"traceutil/trace.go:172","msg":"trace[595837021] transaction","detail":"{read_only:false; response_revision:1040; number_of_response:1; }","duration":"197.116533ms","start":"2025-11-01T09:28:35.015599Z","end":"2025-11-01T09:28:35.212716Z","steps":["trace[595837021] 'process raft request'  (duration: 192.938286ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:28:47.000848Z","caller":"traceutil/trace.go:172","msg":"trace[593017516] transaction","detail":"{read_only:false; response_revision:1140; number_of_response:1; }","duration":"330.242343ms","start":"2025-11-01T09:28:46.670595Z","end":"2025-11-01T09:28:47.000838Z","steps":["trace[593017516] 'process raft request'  (duration: 330.139883ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:28:47.000819Z","caller":"traceutil/trace.go:172","msg":"trace[445217228] linearizableReadLoop","detail":"{readStateIndex:1169; appliedIndex:1169; }","duration":"311.137191ms","start":"2025-11-01T09:28:46.689579Z","end":"2025-11-01T09:28:47.000716Z","steps":["trace[445217228] 'read index received'  (duration: 311.128923ms)","trace[445217228] 'applied index is now lower than readState.Index'  (duration: 7.139µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:28:47.001253Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"311.67117ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:28:47.001292Z","caller":"traceutil/trace.go:172","msg":"trace[560079232] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1140; }","duration":"311.699339ms","start":"2025-11-01T09:28:46.689572Z","end":"2025-11-01T09:28:47.001272Z","steps":["trace[560079232] 'agreement among raft nodes before linearized reading'  (duration: 311.645098ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:28:47.001058Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:28:46.670576Z","time spent":"330.398047ms","remote":"127.0.0.1:53022","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1116 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-11-01T09:28:47.002990Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.217959ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:28:47.003132Z","caller":"traceutil/trace.go:172","msg":"trace[25317338] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"263.370559ms","start":"2025-11-01T09:28:46.739752Z","end":"2025-11-01T09:28:47.003123Z","steps":["trace[25317338] 'agreement among raft nodes before linearized reading'  (duration: 262.300594ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:28:47.003559Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"148.852336ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:28:47.003609Z","caller":"traceutil/trace.go:172","msg":"trace[510491791] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1140; }","duration":"148.907414ms","start":"2025-11-01T09:28:46.854696Z","end":"2025-11-01T09:28:47.003603Z","steps":["trace[510491791] 'agreement among raft nodes before linearized reading'  (duration: 148.830719ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:28:53.254237Z","caller":"traceutil/trace.go:172","msg":"trace[1749761227] linearizableReadLoop","detail":"{readStateIndex:1192; appliedIndex:1192; }","duration":"160.961575ms","start":"2025-11-01T09:28:53.093246Z","end":"2025-11-01T09:28:53.254208Z","steps":["trace[1749761227] 'read index received'  (duration: 160.954681ms)","trace[1749761227] 'applied index is now lower than readState.Index'  (duration: 5.638µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:28:53.254686Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"161.414524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:28:53.256165Z","caller":"traceutil/trace.go:172","msg":"trace[1232810910] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:1161; }","duration":"162.719583ms","start":"2025-11-01T09:28:53.093242Z","end":"2025-11-01T09:28:53.255961Z","steps":["trace[1232810910] 'agreement among raft nodes before linearized reading'  (duration: 161.107332ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:28:53.256337Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.824707ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csidrivers\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:28:53.256358Z","caller":"traceutil/trace.go:172","msg":"trace[369879715] range","detail":"{range_begin:/registry/csidrivers; range_end:; response_count:0; response_revision:1162; }","duration":"140.917307ms","start":"2025-11-01T09:28:53.115434Z","end":"2025-11-01T09:28:53.256352Z","steps":["trace[369879715] 'agreement among raft nodes before linearized reading'  (duration: 140.6395ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:28:53.257459Z","caller":"traceutil/trace.go:172","msg":"trace[207982295] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"190.38688ms","start":"2025-11-01T09:28:53.067062Z","end":"2025-11-01T09:28:53.257449Z","steps":["trace[207982295] 'process raft request'  (duration: 187.501884ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:28:55.183669Z","caller":"traceutil/trace.go:172","msg":"trace[988535762] transaction","detail":"{read_only:false; response_revision:1163; number_of_response:1; }","duration":"173.374046ms","start":"2025-11-01T09:28:55.010281Z","end":"2025-11-01T09:28:55.183655Z","steps":["trace[988535762] 'process raft request'  (duration: 173.257031ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:28:56.172718Z","caller":"traceutil/trace.go:172","msg":"trace[1941284602] linearizableReadLoop","detail":"{readStateIndex:1197; appliedIndex:1197; }","duration":"143.360579ms","start":"2025-11-01T09:28:56.029339Z","end":"2025-11-01T09:28:56.172700Z","steps":["trace[1941284602] 'read index received'  (duration: 143.355248ms)","trace[1941284602] 'applied index is now lower than readState.Index'  (duration: 4.697µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:28:56.172921Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"143.52208ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourceclaims\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:28:56.172942Z","caller":"traceutil/trace.go:172","msg":"trace[1245640961] range","detail":"{range_begin:/registry/resourceclaims; range_end:; response_count:0; response_revision:1165; }","duration":"143.601465ms","start":"2025-11-01T09:28:56.029335Z","end":"2025-11-01T09:28:56.172936Z","steps":["trace[1245640961] 'agreement among raft nodes before linearized reading'  (duration: 143.490593ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:28:56.173305Z","caller":"traceutil/trace.go:172","msg":"trace[1452220831] transaction","detail":"{read_only:false; response_revision:1166; number_of_response:1; }","duration":"252.550287ms","start":"2025-11-01T09:28:55.920746Z","end":"2025-11-01T09:28:56.173297Z","steps":["trace[1452220831] 'process raft request'  (duration: 252.418194ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:28:56.175917Z","caller":"traceutil/trace.go:172","msg":"trace[1015778233] transaction","detail":"{read_only:false; response_revision:1167; number_of_response:1; }","duration":"169.552664ms","start":"2025-11-01T09:28:56.006354Z","end":"2025-11-01T09:28:56.175907Z","steps":["trace[1015778233] 'process raft request'  (duration: 169.441663ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:16.900173Z","caller":"traceutil/trace.go:172","msg":"trace[850556822] transaction","detail":"{read_only:false; response_revision:1629; number_of_response:1; }","duration":"169.094303ms","start":"2025-11-01T09:30:16.731049Z","end":"2025-11-01T09:30:16.900144Z","steps":["trace[850556822] 'process raft request'  (duration: 169.007224ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:57.360637Z","caller":"traceutil/trace.go:172","msg":"trace[644747805] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1785; }","duration":"153.294128ms","start":"2025-11-01T09:30:57.207322Z","end":"2025-11-01T09:30:57.360616Z","steps":["trace[644747805] 'process raft request'  (duration: 153.138128ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:32:31 up 5 min,  0 users,  load average: 0.51, 1.20, 0.67
	Linux addons-610936 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [8034149d00598833129fa576f1e2fc17f25643b0868c221ee401136b08eb574f] <==
	E1101 09:28:31.557543       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.111.202:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.111.202:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.111.202:443: connect: connection refused" logger="UnhandledError"
	E1101 09:28:31.559949       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.111.202:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.111.202:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.111.202:443: connect: connection refused" logger="UnhandledError"
	I1101 09:28:31.653069       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 09:29:39.976820       1 conn.go:339] Error on socket receive: read tcp 192.168.39.81:8443->192.168.39.1:37408: use of closed network connection
	E1101 09:29:40.185521       1 conn.go:339] Error on socket receive: read tcp 192.168.39.81:8443->192.168.39.1:37424: use of closed network connection
	I1101 09:29:49.385430       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.23.18"}
	I1101 09:30:05.715976       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 09:30:05.936493       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.157.14"}
	E1101 09:30:30.456166       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1101 09:30:32.588266       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1101 09:30:34.466832       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1101 09:31:01.937353       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:31:01.937427       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:31:01.972654       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:31:01.972779       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:31:02.001849       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:31:02.002438       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:31:02.081227       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:31:02.081277       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 09:31:02.098727       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 09:31:02.098790       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1101 09:31:03.081820       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W1101 09:31:03.099146       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1101 09:31:03.144443       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I1101 09:32:30.111114       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.105.61"}
	
	
	==> kube-controller-manager [2273a9881f45e98bd51b08079c70bba61edb15367d96d4fc307a139e6efdecc0] <==
	E1101 09:31:07.186269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:31:07.590018       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:31:07.591145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:31:10.031538       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:31:10.032674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:31:11.466736       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:31:11.467832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:31:12.177986       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:31:12.179041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:31:19.833218       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:31:19.834976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:31:22.812092       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:31:22.813096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:31:23.521960       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:31:23.523050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:31:34.706771       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:31:34.708057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:31:45.640809       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:31:45.642467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:31:48.392733       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:31:48.393806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:32:16.179991       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:32:16.181159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 09:32:23.377202       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 09:32:23.378443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [b35b53f7950068a92486e0920f2ff6340f6e3caa74dd0d95fbb470ac779d65b6] <==
	I1101 09:27:37.511112       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:27:37.615270       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:27:37.615321       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.81"]
	E1101 09:27:37.616479       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:27:37.976427       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:27:37.976547       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:27:37.976580       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:27:38.006057       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:27:38.007317       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:27:38.009994       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:27:38.018945       1 config.go:200] "Starting service config controller"
	I1101 09:27:38.018979       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:27:38.019098       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:27:38.019104       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:27:38.019829       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:27:38.019857       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:27:38.026715       1 config.go:309] "Starting node config controller"
	I1101 09:27:38.026755       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:27:38.026979       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:27:38.122849       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:27:38.122956       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:27:38.122990       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fafb3f50759fb3ca608566a3f99c714cd2c84822225a83a2784a9703746c5e3f] <==
	E1101 09:27:27.833388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:27:27.833424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:27:27.833460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:27:27.833548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:27:27.833569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:27:27.833582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:27:27.835128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:27:27.835221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:27:28.640268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:27:28.644262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:27:28.715929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:27:28.752013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:27:28.862688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:27:28.914561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:27:28.927796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:27:28.986511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:27:29.004123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:27:29.025955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:27:29.065699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:27:29.205013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:27:29.229452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:27:29.260154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:27:29.262692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:27:29.298722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 09:27:31.824472       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:31:06 addons-610936 kubelet[1511]: I1101 09:31:06.076618    1511 scope.go:117] "RemoveContainer" containerID="88f0f76d37225f7e3d6ef11327954d32424d6d9ff6d03c410f83f8c86cd3f930"
	Nov 01 09:31:06 addons-610936 kubelet[1511]: I1101 09:31:06.907302    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e366fa-d417-486f-8411-453ae49795b5" path="/var/lib/kubelet/pods/46e366fa-d417-486f-8411-453ae49795b5/volumes"
	Nov 01 09:31:06 addons-610936 kubelet[1511]: I1101 09:31:06.907739    1511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96036b32-0725-4493-bc98-d16f3b3a0eab" path="/var/lib/kubelet/pods/96036b32-0725-4493-bc98-d16f3b3a0eab/volumes"
	Nov 01 09:31:11 addons-610936 kubelet[1511]: E1101 09:31:11.552156    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989471551616158  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:31:11 addons-610936 kubelet[1511]: E1101 09:31:11.552201    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989471551616158  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:31:21 addons-610936 kubelet[1511]: E1101 09:31:21.555174    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989481554688101  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:31:21 addons-610936 kubelet[1511]: E1101 09:31:21.555226    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989481554688101  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:31:31 addons-610936 kubelet[1511]: E1101 09:31:31.558799    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989491558333925  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:31:31 addons-610936 kubelet[1511]: E1101 09:31:31.558826    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989491558333925  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:31:40 addons-610936 kubelet[1511]: I1101 09:31:40.903571    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-5pdrl" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:31:41 addons-610936 kubelet[1511]: E1101 09:31:41.562432    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989501561975995  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:31:41 addons-610936 kubelet[1511]: E1101 09:31:41.562480    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989501561975995  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:31:51 addons-610936 kubelet[1511]: E1101 09:31:51.566421    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989511565722612  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:31:51 addons-610936 kubelet[1511]: E1101 09:31:51.566452    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989511565722612  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:31:57 addons-610936 kubelet[1511]: I1101 09:31:57.902470    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-gbqkt" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:32:00 addons-610936 kubelet[1511]: I1101 09:32:00.909052    1511 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:32:01 addons-610936 kubelet[1511]: E1101 09:32:01.570324    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989521569677647  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:32:01 addons-610936 kubelet[1511]: E1101 09:32:01.570389    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989521569677647  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:32:11 addons-610936 kubelet[1511]: E1101 09:32:11.574263    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989531573660042  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:32:11 addons-610936 kubelet[1511]: E1101 09:32:11.574297    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989531573660042  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:32:21 addons-610936 kubelet[1511]: E1101 09:32:21.577802    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989541577332484  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:32:21 addons-610936 kubelet[1511]: E1101 09:32:21.577832    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989541577332484  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:32:30 addons-610936 kubelet[1511]: I1101 09:32:30.036460    1511 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwn2v\" (UniqueName: \"kubernetes.io/projected/f13847ec-6e8a-4499-8515-1d71d187aeba-kube-api-access-xwn2v\") pod \"hello-world-app-5d498dc89-d6d67\" (UID: \"f13847ec-6e8a-4499-8515-1d71d187aeba\") " pod="default/hello-world-app-5d498dc89-d6d67"
	Nov 01 09:32:31 addons-610936 kubelet[1511]: E1101 09:32:31.582582    1511 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989551581810569  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	Nov 01 09:32:31 addons-610936 kubelet[1511]: E1101 09:32:31.583152    1511 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989551581810569  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:588624}  inodes_used:{value:201}}"
	
	
	==> storage-provisioner [5af9f4f51dd4ff474fd24e5769516432460f5a719cc8ada9f3335798427616bd] <==
	W1101 09:32:06.790334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:08.794394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:08.801011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:10.805503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:10.814192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:12.817718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:12.825945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:14.829944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:14.837382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:16.842922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:16.851119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:18.855676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:18.867002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:20.870691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:20.876629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:22.881276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:22.887611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:24.891915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:24.898720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:26.903100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:26.914849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:28.920028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:28.926735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:30.936190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:32:30.945614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-610936 -n addons-610936
helpers_test.go:269: (dbg) Run:  kubectl --context addons-610936 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-d6d67 ingress-nginx-admission-create-v2tvv ingress-nginx-admission-patch-k7flz
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-610936 describe pod hello-world-app-5d498dc89-d6d67 ingress-nginx-admission-create-v2tvv ingress-nginx-admission-patch-k7flz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-610936 describe pod hello-world-app-5d498dc89-d6d67 ingress-nginx-admission-create-v2tvv ingress-nginx-admission-patch-k7flz: exit status 1 (77.723979ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-d6d67
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-610936/192.168.39.81
	Start Time:       Sat, 01 Nov 2025 09:32:30 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xwn2v (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xwn2v:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-d6d67 to addons-610936
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.376s (1.376s including waiting). Image size: 4944818 bytes.
	  Normal  Created    0s    kubelet            Created container: hello-world-app
	  Normal  Started    0s    kubelet            Started container hello-world-app

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-v2tvv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-k7flz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-610936 describe pod hello-world-app-5d498dc89-d6d67 ingress-nginx-admission-create-v2tvv ingress-nginx-admission-patch-k7flz: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 addons disable ingress-dns --alsologtostderr -v=1: (1.15313668s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 addons disable ingress --alsologtostderr -v=1: (7.845391546s)
--- FAIL: TestAddons/parallel/Ingress (156.43s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (389.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [1e33881d-15d8-4ad9-98e9-42b78d0e3748] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.006702665s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-165244 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-165244 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-165244 get pvc myclaim -o=json
I1101 09:38:17.704602  348518 retry.go:31] will retry after 1.765747946s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:ad3bf7ea-8484-4c03-91b2-4f9e4e5eed0c ResourceVersion:754 Generation:0 CreationTimestamp:2025-11-01 09:38:17 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0019a09d0 VolumeMode:0xc0019a09e0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-165244 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-165244 apply -f testdata/storage-provisioner/pod.yaml
I1101 09:38:19.767209  348518 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [a6714659-1d6a-4232-b65a-43737704d400] Pending
helpers_test.go:352: "sp-pod" [a6714659-1d6a-4232-b65a-43737704d400] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [a6714659-1d6a-4232-b65a-43737704d400] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.007640597s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-165244 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-165244 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-165244 delete -f testdata/storage-provisioner/pod.yaml: (1.196582917s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-165244 apply -f testdata/storage-provisioner/pod.yaml
I1101 09:38:38.279382  348518 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [852034b9-551b-4941-b228-14e19166927a] Pending
helpers_test.go:352: "sp-pod" [852034b9-551b-4941-b228-14e19166927a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:337: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-165244 -n functional-165244
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-11-01 09:44:38.536195793 +0000 UTC m=+1082.398073686
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-165244 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-165244 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-165244/192.168.39.117
Start Time:       Sat, 01 Nov 2025 09:38:38 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.13
IPs:
  IP:  10.244.0.13
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j9zvr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-j9zvr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/sp-pod to functional-165244
  Normal   Pulling    98s (x4 over 6m)     kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     56s (x4 over 4m52s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     56s (x4 over 4m52s)  kubelet            Error: ErrImagePull
  Normal   BackOff    7s (x8 over 4m52s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     7s (x8 over 4m52s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-165244 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-165244 logs sp-pod -n default: exit status 1 (78.910315ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-165244 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-165244 -n functional-165244
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 logs -n 25: (1.766579741s)
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service        │ functional-165244 service --namespace=default --https --url hello-node                                            │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ service        │ functional-165244 service hello-node --url --format={{.IP}}                                                       │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ ssh            │ functional-165244 ssh findmnt -T /mount-9p | grep 9p                                                              │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ service        │ functional-165244 service hello-node --url                                                                        │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ ssh            │ functional-165244 ssh -- ls -la /mount-9p                                                                         │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ license        │                                                                                                                   │ minikube          │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ ssh            │ functional-165244 ssh sudo umount -f /mount-9p                                                                    │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ mount          │ -p functional-165244 /tmp/TestFunctionalparallelMountCmdVerifyCleanup529243428/001:/mount2 --alsologtostderr -v=1 │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ mount          │ -p functional-165244 /tmp/TestFunctionalparallelMountCmdVerifyCleanup529243428/001:/mount3 --alsologtostderr -v=1 │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ mount          │ -p functional-165244 /tmp/TestFunctionalparallelMountCmdVerifyCleanup529243428/001:/mount1 --alsologtostderr -v=1 │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ ssh            │ functional-165244 ssh findmnt -T /mount1                                                                          │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ ssh            │ functional-165244 ssh findmnt -T /mount1                                                                          │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ ssh            │ functional-165244 ssh findmnt -T /mount2                                                                          │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ ssh            │ functional-165244 ssh findmnt -T /mount3                                                                          │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ update-context │ functional-165244 update-context --alsologtostderr -v=2                                                           │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ update-context │ functional-165244 update-context --alsologtostderr -v=2                                                           │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ mount          │ -p functional-165244 --kill=true                                                                                  │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ update-context │ functional-165244 update-context --alsologtostderr -v=2                                                           │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ image          │ functional-165244 image ls --format short --alsologtostderr                                                       │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ image          │ functional-165244 image ls --format yaml --alsologtostderr                                                        │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ ssh            │ functional-165244 ssh pgrep buildkitd                                                                             │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │                     │
	│ image          │ functional-165244 image ls --format json --alsologtostderr                                                        │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ image          │ functional-165244 image build -t localhost/my-image:functional-165244 testdata/build --alsologtostderr            │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ image          │ functional-165244 image ls --format table --alsologtostderr                                                       │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	│ image          │ functional-165244 image ls                                                                                        │ functional-165244 │ jenkins │ v1.37.0 │ 01 Nov 25 09:38 UTC │ 01 Nov 25 09:38 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:38:35
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:38:35.822028  354708 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:38:35.822157  354708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:38:35.822163  354708 out.go:374] Setting ErrFile to fd 2...
	I1101 09:38:35.822167  354708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:38:35.822387  354708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 09:38:35.822886  354708 out.go:368] Setting JSON to false
	I1101 09:38:35.823790  354708 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4864,"bootTime":1761985052,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:38:35.823916  354708 start.go:143] virtualization: kvm guest
	I1101 09:38:35.826421  354708 out.go:179] * [functional-165244] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:38:35.828091  354708 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:38:35.828129  354708 notify.go:221] Checking for updates...
	I1101 09:38:35.831020  354708 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:38:35.832478  354708 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 09:38:35.833920  354708 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 09:38:35.835695  354708 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:38:35.837247  354708 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:38:35.839264  354708 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:38:35.840001  354708 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:38:35.872904  354708 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:38:35.874341  354708 start.go:309] selected driver: kvm2
	I1101 09:38:35.874362  354708 start.go:930] validating driver "kvm2" against &{Name:functional-165244 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:functional-165244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:38:35.874511  354708 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:38:35.875495  354708 cni.go:84] Creating CNI manager for ""
	I1101 09:38:35.875564  354708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:38:35.875623  354708 start.go:353] cluster config:
	{Name:functional-165244 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-165244 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:38:35.877335  354708 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.409128136Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990279409102524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:262153,},InodesUsed:&UInt64Value{Value:120,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b776c2c-378f-4613-acef-4ed6338d7583 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.409947889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d8f0068b-c328-4eb7-9bff-abcaf99f0732 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.410053114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d8f0068b-c328-4eb7-9bff-abcaf99f0732 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.410555953Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29971aadc416025ca040970558801192a64329ee548cb9c705ba068475d5947a,PodSandboxId:b89fe29d879aca51d627e37fed3a8b12365c780e6825fd195396de48c9366892,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1761989924333523852,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-r6bt8,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 04c5b40d-e309-4f45-829a-b7fc3e7fb4ad,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a502f1758d24f7f110c1d4180b28b14babbd1ad62fb0fc86c826d8fe58185a,PodSandboxId:2e30784627b87b721bfc9769afed942de66807bc3580be179f6ec30e2d6a466b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761989916372269305,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ccd6612-00eb-48a8-
8ffe-ff3fcd043624,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea5603d966f5501ddbcd5711fdddea2ffe0ea561f9de0d3870184dbca30473a,PodSandboxId:d3602a3d739a9560b6a04490bcfa926769aef6066a5ab51c26bd46b84cebc87a,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761989911080664350,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-9crl9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c995ee
a8-780c-4933-99fa-8674a74f2ac7,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0dbfbb292a6c10cd17da4531bd35cb5f1e92cd926100d2d1cd12e1a5813ba04,PodSandboxId:675e9ece92b9827c412d17fbabaf300ca5272c1e4212284396333bcbd504caca,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761989904849063110,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-4w988,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 724ab0ee-76a4-4632-b6d4-b0c41df4b5b4,},Annotati
ons:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:563c88c65bc67f52e7c916cc7ff74778afb787453269874c3e1def9d5cde55c3,PodSandboxId:fa8b407ea8b2158dbf30188d9bce5ab8aeef1854b740042c172133ecb566f99d,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761989904602164350,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85df
c575-4r7xh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4ace3f51-0dc1-4472-a385-5288000832f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56e23876041ff6e56285b56c01a9689fcc7b8d076daa53c72de249e29c36dc2,PodSandboxId:2555ed549595334454e41b05e29e2b46a2a7a47a69c8694d7db8cef797a57045,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761989876837758475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e33881d-15d8-4ad9-98e9-42b78d0e3748,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11796229ba12b0ec1c1b805e87a56107b2925ef8cb97ddf5fdb48773b89613e,PodSandboxId:2d66a3ca9c30a111d4a150e54aa06ba228b2914ce9d240b37ea405aafdf9bb55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989865210061591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-165244,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1293e157bf19af343a922497f0117f9e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1459a19065fdd5fefbd25f4b2a9bef03e8d406984ace7e06799fabc818595b,PodSandboxId:2555ed549595334454e41b05e29e2b46a2a7a47a69c8694d7db8cef797a57045,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761989865030606821,Labels:map[string]string{io.k
ubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e33881d-15d8-4ad9-98e9-42b78d0e3748,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701f78b904d376231015456d22a06204a8fdc0b156c6d0c7655fde74e76ce2e3,PodSandboxId:dd2fa3d1652db58362ef096c063ed6509d68116ef838964c43c9459b77196e1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989865047496700,Labels:map[string]string{io.kubernetes.container.name
: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xc864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242c1cca-34c7-42a6-8e48-95924037136a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48446b37c1b5613b222ee01cab262a5a7419bba71e99ee4b0a8f4944472b6da,PodSandboxId:2d66a3ca9c30a111d4a150e54aa06ba228b2914ce9d240b37ea405aafdf9bb55,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989864281315997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1293e157bf19af343a922497f0117f9e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fb4cabebf9ec43bcb7770a42556698188428e401404fe89cfc30d76c9c997,PodSandboxId:b2578e0
7fcd6ae85056dddd02077b1428db40bc57299418e734804160579345d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989864057561109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3e01f38020231a859965b57a2c8131,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:c7b2979a8ce4e0f48df1d96b84156f49753ae23ad8ed23c522daa9a5d0c0996c,PodSandboxId:4b1eb5f9fa1ff59cbf99a4edf0d2464e12ca7b11138900c335c6cbacbe5aad58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989859737180885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e05e1731db5888b1911a6a04c81f44,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303ab22dbd0c6c4a4fb792f3d37582bdda9a024e3a2495fc0ef2cad6139f6778,PodSandboxId:4c10d0fe46fbd40c7d438b5eb02c5c351c36b94bde9951b89d35368120daa1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989859752040376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 575d1b49aa2a291a9a13af46c351b088,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab01021df6885bc1674c819645c932515e1751e19b37225217473300f4a5db31,PodSandboxId:c64ac5c69c67ebc2eb560f4aacca200f07daa5aac48f4f7b0ceb0e9b1149477f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989859470182573,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qjj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91943c87-699c-45e6-9f2e-d020ddcce2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.contai
ner.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e8cf30e4c602f25bbf6cf96d1745443f61cfe2898b4ed615bc0a56c3ef62c37,PodSandboxId:b2578e07fcd6ae85056dddd02077b1428db40bc57299418e734804160579345d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989859344551795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3e01f38020231a859965b57a2c8131,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kube
rnetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedd537af68630246456458b7b9da4df4d82fe978c732f7a44510ef25560fd73,PodSandboxId:6390be12ccfcf51d25694fb6365038f4771bbc19614d887bba491f46f7e4df38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989817920948763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1
e05e1731db5888b1911a6a04c81f44,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd349ba3142ba7fb005e0f5bac03a7c29add5944a4f8a008e5c8835e462d272,PodSandboxId:a59e79ad31bea9c46d5a5a5687e91c6bf58fc861c5f80b1844102d406019eb3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989817935981229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-funct
ional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 575d1b49aa2a291a9a13af46c351b088,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5096d8700be537c186d3c61a1582faeb1dbb1caf964ce87f2af07a834289212a,PodSandboxId:7ee3b0706db47738e59c0f1d565f0dc3e4a9e6856e600edfff8e9fc471d324c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989811206055855,Labels:map[
string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qjj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91943c87-699c-45e6-9f2e-d020ddcce2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139858a5b31c3c3883e472ba9add14a79e99e8d9751ec2c82cf826f36bb54570,PodSandboxId:eae2acee2758eb8eb56b1c8ed5e3920470b4b3d89abd77671f97360ec79b38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989811207845520,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xc864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242c1cca-34c7-42a6-8e48-95924037136a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d8f0068b-c328-4eb7-9bff-abcaf99f0732 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.463601979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2988b1a-1980-49b7-bad8-b02830186096 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.464180166Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2988b1a-1980-49b7-bad8-b02830186096 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.466105666Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b05b87ac-d74a-4dc7-ade1-c7b9d410eacf name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.467822143Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990279467684558,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:262153,},InodesUsed:&UInt64Value{Value:120,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b05b87ac-d74a-4dc7-ade1-c7b9d410eacf name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.468644758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0d480b0-b87b-46ef-8504-3ec3c5d49bae name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.468707600Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0d480b0-b87b-46ef-8504-3ec3c5d49bae name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.469110281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29971aadc416025ca040970558801192a64329ee548cb9c705ba068475d5947a,PodSandboxId:b89fe29d879aca51d627e37fed3a8b12365c780e6825fd195396de48c9366892,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1761989924333523852,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-r6bt8,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 04c5b40d-e309-4f45-829a-b7fc3e7fb4ad,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a502f1758d24f7f110c1d4180b28b14babbd1ad62fb0fc86c826d8fe58185a,PodSandboxId:2e30784627b87b721bfc9769afed942de66807bc3580be179f6ec30e2d6a466b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761989916372269305,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ccd6612-00eb-48a8-
8ffe-ff3fcd043624,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea5603d966f5501ddbcd5711fdddea2ffe0ea561f9de0d3870184dbca30473a,PodSandboxId:d3602a3d739a9560b6a04490bcfa926769aef6066a5ab51c26bd46b84cebc87a,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761989911080664350,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-9crl9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c995ee
a8-780c-4933-99fa-8674a74f2ac7,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0dbfbb292a6c10cd17da4531bd35cb5f1e92cd926100d2d1cd12e1a5813ba04,PodSandboxId:675e9ece92b9827c412d17fbabaf300ca5272c1e4212284396333bcbd504caca,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761989904849063110,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-4w988,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 724ab0ee-76a4-4632-b6d4-b0c41df4b5b4,},Annotati
ons:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:563c88c65bc67f52e7c916cc7ff74778afb787453269874c3e1def9d5cde55c3,PodSandboxId:fa8b407ea8b2158dbf30188d9bce5ab8aeef1854b740042c172133ecb566f99d,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761989904602164350,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85df
c575-4r7xh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4ace3f51-0dc1-4472-a385-5288000832f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56e23876041ff6e56285b56c01a9689fcc7b8d076daa53c72de249e29c36dc2,PodSandboxId:2555ed549595334454e41b05e29e2b46a2a7a47a69c8694d7db8cef797a57045,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761989876837758475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e33881d-15d8-4ad9-98e9-42b78d0e3748,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11796229ba12b0ec1c1b805e87a56107b2925ef8cb97ddf5fdb48773b89613e,PodSandboxId:2d66a3ca9c30a111d4a150e54aa06ba228b2914ce9d240b37ea405aafdf9bb55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989865210061591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-165244,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1293e157bf19af343a922497f0117f9e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1459a19065fdd5fefbd25f4b2a9bef03e8d406984ace7e06799fabc818595b,PodSandboxId:2555ed549595334454e41b05e29e2b46a2a7a47a69c8694d7db8cef797a57045,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761989865030606821,Labels:map[string]string{io.k
ubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e33881d-15d8-4ad9-98e9-42b78d0e3748,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701f78b904d376231015456d22a06204a8fdc0b156c6d0c7655fde74e76ce2e3,PodSandboxId:dd2fa3d1652db58362ef096c063ed6509d68116ef838964c43c9459b77196e1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989865047496700,Labels:map[string]string{io.kubernetes.container.name
: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xc864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242c1cca-34c7-42a6-8e48-95924037136a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48446b37c1b5613b222ee01cab262a5a7419bba71e99ee4b0a8f4944472b6da,PodSandboxId:2d66a3ca9c30a111d4a150e54aa06ba228b2914ce9d240b37ea405aafdf9bb55,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989864281315997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1293e157bf19af343a922497f0117f9e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fb4cabebf9ec43bcb7770a42556698188428e401404fe89cfc30d76c9c997,PodSandboxId:b2578e0
7fcd6ae85056dddd02077b1428db40bc57299418e734804160579345d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989864057561109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3e01f38020231a859965b57a2c8131,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:c7b2979a8ce4e0f48df1d96b84156f49753ae23ad8ed23c522daa9a5d0c0996c,PodSandboxId:4b1eb5f9fa1ff59cbf99a4edf0d2464e12ca7b11138900c335c6cbacbe5aad58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989859737180885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e05e1731db5888b1911a6a04c81f44,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303ab22dbd0c6c4a4fb792f3d37582bdda9a024e3a2495fc0ef2cad6139f6778,PodSandboxId:4c10d0fe46fbd40c7d438b5eb02c5c351c36b94bde9951b89d35368120daa1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989859752040376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 575d1b49aa2a291a9a13af46c351b088,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab01021df6885bc1674c819645c932515e1751e19b37225217473300f4a5db31,PodSandboxId:c64ac5c69c67ebc2eb560f4aacca200f07daa5aac48f4f7b0ceb0e9b1149477f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989859470182573,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qjj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91943c87-699c-45e6-9f2e-d020ddcce2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.contai
ner.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e8cf30e4c602f25bbf6cf96d1745443f61cfe2898b4ed615bc0a56c3ef62c37,PodSandboxId:b2578e07fcd6ae85056dddd02077b1428db40bc57299418e734804160579345d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989859344551795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3e01f38020231a859965b57a2c8131,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kube
rnetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedd537af68630246456458b7b9da4df4d82fe978c732f7a44510ef25560fd73,PodSandboxId:6390be12ccfcf51d25694fb6365038f4771bbc19614d887bba491f46f7e4df38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989817920948763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1
e05e1731db5888b1911a6a04c81f44,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd349ba3142ba7fb005e0f5bac03a7c29add5944a4f8a008e5c8835e462d272,PodSandboxId:a59e79ad31bea9c46d5a5a5687e91c6bf58fc861c5f80b1844102d406019eb3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989817935981229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-funct
ional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 575d1b49aa2a291a9a13af46c351b088,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5096d8700be537c186d3c61a1582faeb1dbb1caf964ce87f2af07a834289212a,PodSandboxId:7ee3b0706db47738e59c0f1d565f0dc3e4a9e6856e600edfff8e9fc471d324c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989811206055855,Labels:map[
string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qjj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91943c87-699c-45e6-9f2e-d020ddcce2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139858a5b31c3c3883e472ba9add14a79e99e8d9751ec2c82cf826f36bb54570,PodSandboxId:eae2acee2758eb8eb56b1c8ed5e3920470b4b3d89abd77671f97360ec79b38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989811207845520,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xc864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242c1cca-34c7-42a6-8e48-95924037136a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0d480b0-b87b-46ef-8504-3ec3c5d49bae name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.513853188Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58b15155-a633-4d02-b1f2-802512473bf1 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.513934306Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58b15155-a633-4d02-b1f2-802512473bf1 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.515671647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d28d8fa9-4a8c-455e-9fea-86d15aefd004 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.516623383Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990279516470758,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:262153,},InodesUsed:&UInt64Value{Value:120,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d28d8fa9-4a8c-455e-9fea-86d15aefd004 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.517496396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68c0ef2f-cdf0-41ba-9b9c-d19c94d0f071 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.517575598Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68c0ef2f-cdf0-41ba-9b9c-d19c94d0f071 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.518566906Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29971aadc416025ca040970558801192a64329ee548cb9c705ba068475d5947a,PodSandboxId:b89fe29d879aca51d627e37fed3a8b12365c780e6825fd195396de48c9366892,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1761989924333523852,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-r6bt8,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 04c5b40d-e309-4f45-829a-b7fc3e7fb4ad,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a502f1758d24f7f110c1d4180b28b14babbd1ad62fb0fc86c826d8fe58185a,PodSandboxId:2e30784627b87b721bfc9769afed942de66807bc3580be179f6ec30e2d6a466b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761989916372269305,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ccd6612-00eb-48a8-
8ffe-ff3fcd043624,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea5603d966f5501ddbcd5711fdddea2ffe0ea561f9de0d3870184dbca30473a,PodSandboxId:d3602a3d739a9560b6a04490bcfa926769aef6066a5ab51c26bd46b84cebc87a,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761989911080664350,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-9crl9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c995ee
a8-780c-4933-99fa-8674a74f2ac7,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0dbfbb292a6c10cd17da4531bd35cb5f1e92cd926100d2d1cd12e1a5813ba04,PodSandboxId:675e9ece92b9827c412d17fbabaf300ca5272c1e4212284396333bcbd504caca,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761989904849063110,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-4w988,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 724ab0ee-76a4-4632-b6d4-b0c41df4b5b4,},Annotati
ons:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:563c88c65bc67f52e7c916cc7ff74778afb787453269874c3e1def9d5cde55c3,PodSandboxId:fa8b407ea8b2158dbf30188d9bce5ab8aeef1854b740042c172133ecb566f99d,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761989904602164350,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85df
c575-4r7xh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4ace3f51-0dc1-4472-a385-5288000832f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56e23876041ff6e56285b56c01a9689fcc7b8d076daa53c72de249e29c36dc2,PodSandboxId:2555ed549595334454e41b05e29e2b46a2a7a47a69c8694d7db8cef797a57045,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761989876837758475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e33881d-15d8-4ad9-98e9-42b78d0e3748,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11796229ba12b0ec1c1b805e87a56107b2925ef8cb97ddf5fdb48773b89613e,PodSandboxId:2d66a3ca9c30a111d4a150e54aa06ba228b2914ce9d240b37ea405aafdf9bb55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989865210061591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-165244,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1293e157bf19af343a922497f0117f9e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1459a19065fdd5fefbd25f4b2a9bef03e8d406984ace7e06799fabc818595b,PodSandboxId:2555ed549595334454e41b05e29e2b46a2a7a47a69c8694d7db8cef797a57045,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761989865030606821,Labels:map[string]string{io.k
ubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e33881d-15d8-4ad9-98e9-42b78d0e3748,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701f78b904d376231015456d22a06204a8fdc0b156c6d0c7655fde74e76ce2e3,PodSandboxId:dd2fa3d1652db58362ef096c063ed6509d68116ef838964c43c9459b77196e1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989865047496700,Labels:map[string]string{io.kubernetes.container.name
: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xc864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242c1cca-34c7-42a6-8e48-95924037136a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48446b37c1b5613b222ee01cab262a5a7419bba71e99ee4b0a8f4944472b6da,PodSandboxId:2d66a3ca9c30a111d4a150e54aa06ba228b2914ce9d240b37ea405aafdf9bb55,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989864281315997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1293e157bf19af343a922497f0117f9e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fb4cabebf9ec43bcb7770a42556698188428e401404fe89cfc30d76c9c997,PodSandboxId:b2578e0
7fcd6ae85056dddd02077b1428db40bc57299418e734804160579345d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989864057561109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3e01f38020231a859965b57a2c8131,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:c7b2979a8ce4e0f48df1d96b84156f49753ae23ad8ed23c522daa9a5d0c0996c,PodSandboxId:4b1eb5f9fa1ff59cbf99a4edf0d2464e12ca7b11138900c335c6cbacbe5aad58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989859737180885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e05e1731db5888b1911a6a04c81f44,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303ab22dbd0c6c4a4fb792f3d37582bdda9a024e3a2495fc0ef2cad6139f6778,PodSandboxId:4c10d0fe46fbd40c7d438b5eb02c5c351c36b94bde9951b89d35368120daa1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989859752040376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 575d1b49aa2a291a9a13af46c351b088,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab01021df6885bc1674c819645c932515e1751e19b37225217473300f4a5db31,PodSandboxId:c64ac5c69c67ebc2eb560f4aacca200f07daa5aac48f4f7b0ceb0e9b1149477f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989859470182573,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qjj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91943c87-699c-45e6-9f2e-d020ddcce2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.contai
ner.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e8cf30e4c602f25bbf6cf96d1745443f61cfe2898b4ed615bc0a56c3ef62c37,PodSandboxId:b2578e07fcd6ae85056dddd02077b1428db40bc57299418e734804160579345d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989859344551795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3e01f38020231a859965b57a2c8131,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kube
rnetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedd537af68630246456458b7b9da4df4d82fe978c732f7a44510ef25560fd73,PodSandboxId:6390be12ccfcf51d25694fb6365038f4771bbc19614d887bba491f46f7e4df38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989817920948763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1
e05e1731db5888b1911a6a04c81f44,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd349ba3142ba7fb005e0f5bac03a7c29add5944a4f8a008e5c8835e462d272,PodSandboxId:a59e79ad31bea9c46d5a5a5687e91c6bf58fc861c5f80b1844102d406019eb3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989817935981229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-funct
ional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 575d1b49aa2a291a9a13af46c351b088,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5096d8700be537c186d3c61a1582faeb1dbb1caf964ce87f2af07a834289212a,PodSandboxId:7ee3b0706db47738e59c0f1d565f0dc3e4a9e6856e600edfff8e9fc471d324c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989811206055855,Labels:map[
string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qjj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91943c87-699c-45e6-9f2e-d020ddcce2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139858a5b31c3c3883e472ba9add14a79e99e8d9751ec2c82cf826f36bb54570,PodSandboxId:eae2acee2758eb8eb56b1c8ed5e3920470b4b3d89abd77671f97360ec79b38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989811207845520,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xc864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242c1cca-34c7-42a6-8e48-95924037136a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68c0ef2f-cdf0-41ba-9b9c-d19c94d0f071 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.574576699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba906732-b7df-4825-86db-d745d8a754aa name=/runtime.v1.RuntimeService/Version
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.574651391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba906732-b7df-4825-86db-d745d8a754aa name=/runtime.v1.RuntimeService/Version
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.576468114Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7136eef-e0e4-4c68-81b6-a07d3ec5e03d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.577272041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990279577244611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:262153,},InodesUsed:&UInt64Value{Value:120,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7136eef-e0e4-4c68-81b6-a07d3ec5e03d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.578040011Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40570b32-77b5-4663-976d-4adcf756b1ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.578325901Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40570b32-77b5-4663-976d-4adcf756b1ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:44:39 functional-165244 crio[5480]: time="2025-11-01 09:44:39.579296940Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:29971aadc416025ca040970558801192a64329ee548cb9c705ba068475d5947a,PodSandboxId:b89fe29d879aca51d627e37fed3a8b12365c780e6825fd195396de48c9366892,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1761989924333523852,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-855c9754f9-r6bt8,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 04c5b40d-e309-4f45-829a-b7fc3e7fb4ad,},Annotations:map[string]string{io.kubernetes.container.has
h: 823ca662,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a502f1758d24f7f110c1d4180b28b14babbd1ad62fb0fc86c826d8fe58185a,PodSandboxId:2e30784627b87b721bfc9769afed942de66807bc3580be179f6ec30e2d6a466b,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1761989916372269305,Labels:map[string]string{io.kubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9ccd6612-00eb-48a8-
8ffe-ff3fcd043624,},Annotations:map[string]string{io.kubernetes.container.hash: dbb284d0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fea5603d966f5501ddbcd5711fdddea2ffe0ea561f9de0d3870184dbca30473a,PodSandboxId:d3602a3d739a9560b6a04490bcfa926769aef6066a5ab51c26bd46b84cebc87a,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761989911080664350,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-75c85bcc94-9crl9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c995ee
a8-780c-4933-99fa-8674a74f2ac7,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0dbfbb292a6c10cd17da4531bd35cb5f1e92cd926100d2d1cd12e1a5813ba04,PodSandboxId:675e9ece92b9827c412d17fbabaf300ca5272c1e4212284396333bcbd504caca,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1761989904849063110,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-5bb876957f-4w988,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 724ab0ee-76a4-4632-b6d4-b0c41df4b5b4,},Annotati
ons:map[string]string{io.kubernetes.container.hash: a60d665,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:563c88c65bc67f52e7c916cc7ff74778afb787453269874c3e1def9d5cde55c3,PodSandboxId:fa8b407ea8b2158dbf30188d9bce5ab8aeef1854b740042c172133ecb566f99d,Metadata:&ContainerMetadata{Name:echo-server,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1761989904602164350,Labels:map[string]string{io.kubernetes.container.name: echo-server,io.kubernetes.pod.name: hello-node-connect-7d85df
c575-4r7xh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4ace3f51-0dc1-4472-a385-5288000832f3,},Annotations:map[string]string{io.kubernetes.container.hash: 3c74da41,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e56e23876041ff6e56285b56c01a9689fcc7b8d076daa53c72de249e29c36dc2,PodSandboxId:2555ed549595334454e41b05e29e2b46a2a7a47a69c8694d7db8cef797a57045,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761989876837758475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e33881d-15d8-4ad9-98e9-42b78d0e3748,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f11796229ba12b0ec1c1b805e87a56107b2925ef8cb97ddf5fdb48773b89613e,PodSandboxId:2d66a3ca9c30a111d4a150e54aa06ba228b2914ce9d240b37ea405aafdf9bb55,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989865210061591,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-165244,io.kubernetes.
pod.namespace: kube-system,io.kubernetes.pod.uid: 1293e157bf19af343a922497f0117f9e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac1459a19065fdd5fefbd25f4b2a9bef03e8d406984ace7e06799fabc818595b,PodSandboxId:2555ed549595334454e41b05e29e2b46a2a7a47a69c8694d7db8cef797a57045,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1761989865030606821,Labels:map[string]string{io.k
ubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e33881d-15d8-4ad9-98e9-42b78d0e3748,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:701f78b904d376231015456d22a06204a8fdc0b156c6d0c7655fde74e76ce2e3,PodSandboxId:dd2fa3d1652db58362ef096c063ed6509d68116ef838964c43c9459b77196e1e,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989865047496700,Labels:map[string]string{io.kubernetes.container.name
: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xc864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242c1cca-34c7-42a6-8e48-95924037136a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48446b37c1b5613b222ee01cab262a5a7419bba71e99ee4b0a8f4944472b6da,PodSandboxId:2d66a3ca9c30a111d4a150e54aa06ba228b2914ce9d240b37ea405aafdf9bb55,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989864281315997,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1293e157bf19af343a922497f0117f9e,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8441,\"containerPort\":8441,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c1fb4cabebf9ec43bcb7770a42556698188428e401404fe89cfc30d76c9c997,PodSandboxId:b2578e0
7fcd6ae85056dddd02077b1428db40bc57299418e734804160579345d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:4,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989864057561109,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3e01f38020231a859965b57a2c8131,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernet
es.pod.terminationGracePeriod: 30,},},&Container{Id:c7b2979a8ce4e0f48df1d96b84156f49753ae23ad8ed23c522daa9a5d0c0996c,PodSandboxId:4b1eb5f9fa1ff59cbf99a4edf0d2464e12ca7b11138900c335c6cbacbe5aad58,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989859737180885,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1e05e1731db5888b1911a6a04c81f44,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303ab22dbd0c6c4a4fb792f3d37582bdda9a024e3a2495fc0ef2cad6139f6778,PodSandboxId:4c10d0fe46fbd40c7d438b5eb02c5c351c36b94bde9951b89d35368120daa1e8,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989859752040376,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 575d1b49aa2a291a9a13af46c351b088,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab01021df6885bc1674c819645c932515e1751e19b37225217473300f4a5db31,PodSandboxId:c64ac5c69c67ebc2eb560f4aacca200f07daa5aac48f4f7b0ceb0e9b1149477f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989859470182573,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qjj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91943c87-699c-45e6-9f2e-d020ddcce2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.contai
ner.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e8cf30e4c602f25bbf6cf96d1745443f61cfe2898b4ed615bc0a56c3ef62c37,PodSandboxId:b2578e07fcd6ae85056dddd02077b1428db40bc57299418e734804160579345d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989859344551795,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d3e01f38020231a859965b57a2c8131,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kube
rnetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aedd537af68630246456458b7b9da4df4d82fe978c732f7a44510ef25560fd73,PodSandboxId:6390be12ccfcf51d25694fb6365038f4771bbc19614d887bba491f46f7e4df38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989817920948763,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1
e05e1731db5888b1911a6a04c81f44,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:acd349ba3142ba7fb005e0f5bac03a7c29add5944a4f8a008e5c8835e462d272,PodSandboxId:a59e79ad31bea9c46d5a5a5687e91c6bf58fc861c5f80b1844102d406019eb3e,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989817935981229,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-funct
ional-165244,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 575d1b49aa2a291a9a13af46c351b088,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5096d8700be537c186d3c61a1582faeb1dbb1caf964ce87f2af07a834289212a,PodSandboxId:7ee3b0706db47738e59c0f1d565f0dc3e4a9e6856e600edfff8e9fc471d324c1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989811206055855,Labels:map[
string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-5qjj7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91943c87-699c-45e6-9f2e-d020ddcce2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139858a5b31c3c3883e472ba9add14a79e99e8d9751ec2c82cf826f36bb54570,PodSandboxId:eae2acee2758eb8eb56b1c8ed5e3920470b4b3d89abd77671f97360ec79b38b1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989811207845520,Labels:map[string]string{io.kubernetes.container
.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-xc864,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 242c1cca-34c7-42a6-8e48-95924037136a,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40570b32-77b5-4663-976d-4adcf756b1ac name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	29971aadc4160       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   5 minutes ago       Running             kubernetes-dashboard      0                   b89fe29d879ac       kubernetes-dashboard-855c9754f9-r6bt8
	47a502f1758d2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e        6 minutes ago       Exited              mount-munger              0                   2e30784627b87       busybox-mount
	fea5603d966f5       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6      6 minutes ago       Running             echo-server               0                   d3602a3d739a9       hello-node-75c85bcc94-9crl9
	f0dbfbb292a6c       5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933                                           6 minutes ago       Running             mysql                     0                   675e9ece92b98       mysql-5bb876957f-4w988
	563c88c65bc67       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6      6 minutes ago       Running             echo-server               0                   fa8b407ea8b21       hello-node-connect-7d85dfc575-4r7xh
	e56e23876041f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           6 minutes ago       Running             storage-provisioner       5                   2555ed5495953       storage-provisioner
	f11796229ba12       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           6 minutes ago       Running             kube-apiserver            1                   2d66a3ca9c30a       kube-apiserver-functional-165244
	701f78b904d37       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           6 minutes ago       Running             coredns                   3                   dd2fa3d1652db       coredns-66bc5c9577-xc864
	ac1459a19065f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           6 minutes ago       Exited              storage-provisioner       4                   2555ed5495953       storage-provisioner
	c48446b37c1b5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           6 minutes ago       Exited              kube-apiserver            0                   2d66a3ca9c30a       kube-apiserver-functional-165244
	1c1fb4cabebf9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           6 minutes ago       Running             kube-controller-manager   4                   b2578e07fcd6a       kube-controller-manager-functional-165244
	303ab22dbd0c6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           6 minutes ago       Running             etcd                      3                   4c10d0fe46fbd       etcd-functional-165244
	c7b2979a8ce4e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           6 minutes ago       Running             kube-scheduler            3                   4b1eb5f9fa1ff       kube-scheduler-functional-165244
	ab01021df6885       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           7 minutes ago       Running             kube-proxy                3                   c64ac5c69c67e       kube-proxy-5qjj7
	1e8cf30e4c602       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           7 minutes ago       Exited              kube-controller-manager   3                   b2578e07fcd6a       kube-controller-manager-functional-165244
	acd349ba3142b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           7 minutes ago       Exited              etcd                      2                   a59e79ad31bea       etcd-functional-165244
	aedd537af6863       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           7 minutes ago       Exited              kube-scheduler            2                   6390be12ccfcf       kube-scheduler-functional-165244
	139858a5b31c3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           7 minutes ago       Exited              coredns                   2                   eae2acee2758e       coredns-66bc5c9577-xc864
	5096d8700be53       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           7 minutes ago       Exited              kube-proxy                2                   7ee3b0706db47       kube-proxy-5qjj7
	
	
	==> coredns [139858a5b31c3c3883e472ba9add14a79e99e8d9751ec2c82cf826f36bb54570] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39509 - 53830 "HINFO IN 2439205633562988030.6402794608069391470. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.492568137s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [701f78b904d376231015456d22a06204a8fdc0b156c6d0c7655fde74e76ce2e3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54375 - 9414 "HINFO IN 7429744827474374282.1889522726282437139. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.107839834s
	
	
	==> describe nodes <==
	Name:               functional-165244
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-165244
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=functional-165244
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_35_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:35:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-165244
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:44:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:44:21 +0000   Sat, 01 Nov 2025 09:35:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:44:21 +0000   Sat, 01 Nov 2025 09:35:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:44:21 +0000   Sat, 01 Nov 2025 09:35:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:44:21 +0000   Sat, 01 Nov 2025 09:35:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.117
	  Hostname:    functional-165244
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001796Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bfd231afaca45e6ad925dbce41c1195
	  System UUID:                5bfd231a-faca-45e6-ad92-5dbce41c1195
	  Boot ID:                    c8f608cf-4779-4048-9852-d17331420de5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-9crl9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  default                     hello-node-connect-7d85dfc575-4r7xh           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  default                     mysql-5bb876957f-4w988                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    6m29s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 coredns-66bc5c9577-xc864                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m2s
	  kube-system                 etcd-functional-165244                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m8s
	  kube-system                 kube-apiserver-functional-165244              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m51s
	  kube-system                 kube-controller-manager-functional-165244     200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-proxy-5qjj7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 kube-scheduler-functional-165244              100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-8bdjz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-r6bt8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m1s                   kube-proxy       
	  Normal  Starting                 6m55s                  kube-proxy       
	  Normal  Starting                 7m48s                  kube-proxy       
	  Normal  Starting                 8m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m7s                   kubelet          Node functional-165244 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m7s                   kubelet          Node functional-165244 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m7s                   kubelet          Node functional-165244 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m7s                   kubelet          Starting kubelet.
	  Normal  NodeReady                9m6s                   kubelet          Node functional-165244 status is now: NodeReady
	  Normal  RegisteredNode           9m3s                   node-controller  Node functional-165244 event: Registered Node functional-165244 in Controller
	  Normal  RegisteredNode           8m6s                   node-controller  Node functional-165244 event: Registered Node functional-165244 in Controller
	  Normal  Starting                 7m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m42s (x8 over 7m42s)  kubelet          Node functional-165244 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m42s (x8 over 7m42s)  kubelet          Node functional-165244 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m42s (x7 over 7m42s)  kubelet          Node functional-165244 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m35s                  node-controller  Node functional-165244 event: Registered Node functional-165244 in Controller
	  Normal  Starting                 6m57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m56s (x8 over 6m56s)  kubelet          Node functional-165244 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m56s (x8 over 6m56s)  kubelet          Node functional-165244 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m56s (x7 over 6m56s)  kubelet          Node functional-165244 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m56s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m48s                  node-controller  Node functional-165244 event: Registered Node functional-165244 in Controller
	
	
	==> dmesg <==
	[  +0.000019] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.086290] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.114660] kauditd_printk_skb: 102 callbacks suppressed
	[  +0.143408] kauditd_printk_skb: 171 callbacks suppressed
	[  +1.080967] kauditd_printk_skb: 18 callbacks suppressed
	[Nov 1 09:36] kauditd_printk_skb: 214 callbacks suppressed
	[  +0.109680] kauditd_printk_skb: 11 callbacks suppressed
	[  +4.600439] kauditd_printk_skb: 313 callbacks suppressed
	[  +3.306438] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.267453] kauditd_printk_skb: 31 callbacks suppressed
	[  +0.130763] kauditd_printk_skb: 26 callbacks suppressed
	[Nov 1 09:37] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.112747] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.502806] kauditd_printk_skb: 78 callbacks suppressed
	[  +1.852707] kauditd_printk_skb: 280 callbacks suppressed
	[  +1.794681] kauditd_printk_skb: 91 callbacks suppressed
	[Nov 1 09:38] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.038114] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.000046] kauditd_printk_skb: 80 callbacks suppressed
	[  +4.595818] kauditd_printk_skb: 120 callbacks suppressed
	[  +1.632723] kauditd_printk_skb: 62 callbacks suppressed
	[  +3.942021] crun[10160]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +1.832595] kauditd_printk_skb: 165 callbacks suppressed
	[Nov 1 09:39] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [303ab22dbd0c6c4a4fb792f3d37582bdda9a024e3a2495fc0ef2cad6139f6778] <==
	{"level":"warn","ts":"2025-11-01T09:37:46.932909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:46.948220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:46.955999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:46.964885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:46.973801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:46.987554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:46.995474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:47.005052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:47.083741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38434","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:38:17.612405Z","caller":"traceutil/trace.go:172","msg":"trace[939589690] transaction","detail":"{read_only:false; response_revision:753; number_of_response:1; }","duration":"215.143293ms","start":"2025-11-01T09:38:17.397187Z","end":"2025-11-01T09:38:17.612331Z","steps":["trace[939589690] 'process raft request'  (duration: 215.021949ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:38:19.281617Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.461646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:38:19.281709Z","caller":"traceutil/trace.go:172","msg":"trace[1689187708] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:764; }","duration":"137.615806ms","start":"2025-11-01T09:38:19.144084Z","end":"2025-11-01T09:38:19.281699Z","steps":["trace[1689187708] 'range keys from in-memory index tree'  (duration: 136.766861ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:38:22.894738Z","caller":"traceutil/trace.go:172","msg":"trace[1074089656] linearizableReadLoop","detail":"{readStateIndex:856; appliedIndex:856; }","duration":"178.643968ms","start":"2025-11-01T09:38:22.716030Z","end":"2025-11-01T09:38:22.894674Z","steps":["trace[1074089656] 'read index received'  (duration: 178.638585ms)","trace[1074089656] 'applied index is now lower than readState.Index'  (duration: 4.584µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:38:22.894896Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.845523ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:38:22.894920Z","caller":"traceutil/trace.go:172","msg":"trace[512230809] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:769; }","duration":"178.888218ms","start":"2025-11-01T09:38:22.716025Z","end":"2025-11-01T09:38:22.894913Z","steps":["trace[512230809] 'agreement among raft nodes before linearized reading'  (duration: 178.78713ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:38:22.895015Z","caller":"traceutil/trace.go:172","msg":"trace[747499855] transaction","detail":"{read_only:false; response_revision:770; number_of_response:1; }","duration":"407.431116ms","start":"2025-11-01T09:38:22.487573Z","end":"2025-11-01T09:38:22.895004Z","steps":["trace[747499855] 'process raft request'  (duration: 407.339722ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:38:22.895282Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.99254ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:38:22.895307Z","caller":"traceutil/trace.go:172","msg":"trace[1110829447] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:770; }","duration":"139.022544ms","start":"2025-11-01T09:38:22.756279Z","end":"2025-11-01T09:38:22.895301Z","steps":["trace[1110829447] 'agreement among raft nodes before linearized reading'  (duration: 138.973799ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:38:22.895583Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:38:22.487554Z","time spent":"407.493327ms","remote":"127.0.0.1:37646","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:769 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-11-01T09:38:26.283644Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.234502ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:38:26.283729Z","caller":"traceutil/trace.go:172","msg":"trace[1686637360] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:812; }","duration":"134.25209ms","start":"2025-11-01T09:38:26.149463Z","end":"2025-11-01T09:38:26.283715Z","steps":["trace[1686637360] 'range keys from in-memory index tree'  (duration: 132.17088ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:38:33.446425Z","caller":"traceutil/trace.go:172","msg":"trace[1242712977] transaction","detail":"{read_only:false; response_revision:831; number_of_response:1; }","duration":"321.905031ms","start":"2025-11-01T09:38:33.124504Z","end":"2025-11-01T09:38:33.446409Z","steps":["trace[1242712977] 'process raft request'  (duration: 321.727316ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:38:33.446578Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:38:33.124479Z","time spent":"322.047443ms","remote":"127.0.0.1:37646","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:823 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-11-01T09:38:43.774007Z","caller":"traceutil/trace.go:172","msg":"trace[812588998] transaction","detail":"{read_only:false; response_revision:921; number_of_response:1; }","duration":"230.428556ms","start":"2025-11-01T09:38:43.543550Z","end":"2025-11-01T09:38:43.773979Z","steps":["trace[812588998] 'process raft request'  (duration: 230.339445ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:39:15.690454Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T09:39:15.370330Z","time spent":"320.112321ms","remote":"127.0.0.1:37446","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	
	
	==> etcd [acd349ba3142ba7fb005e0f5bac03a7c29add5944a4f8a008e5c8835e462d272] <==
	{"level":"warn","ts":"2025-11-01T09:37:00.024291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:00.048096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:00.066668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:00.079277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:00.091698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:00.115571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:37:00.143650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48172","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:37:26.725976Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:37:26.726054Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-165244","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	{"level":"error","ts":"2025-11-01T09:37:26.726129Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:37:26.806538Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:37:26.806622Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:37:26.806643Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d85ef093c7464643","current-leader-member-id":"d85ef093c7464643"}
	{"level":"info","ts":"2025-11-01T09:37:26.806732Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T09:37:26.806742Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T09:37:26.806724Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:37:26.806799Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:37:26.806808Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:37:26.806847Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:37:26.806854Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.117:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:37:26.806859Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.117:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:37:26.810523Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"error","ts":"2025-11-01T09:37:26.810617Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.117:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:37:26.810644Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.117:2380"}
	{"level":"info","ts":"2025-11-01T09:37:26.810651Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-165244","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.117:2380"],"advertise-client-urls":["https://192.168.39.117:2379"]}
	
	
	==> kernel <==
	 09:44:40 up 9 min,  0 users,  load average: 0.08, 0.46, 0.37
	Linux functional-165244 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [c48446b37c1b5613b222ee01cab262a5a7419bba71e99ee4b0a8f4944472b6da] <==
	I1101 09:37:44.643178       1 options.go:263] external host was not specified, using 192.168.39.117
	I1101 09:37:44.646312       1 server.go:150] Version: v1.34.1
	I1101 09:37:44.646415       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1101 09:37:44.653708       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-apiserver [f11796229ba12b0ec1c1b805e87a56107b2925ef8cb97ddf5fdb48773b89613e] <==
	I1101 09:37:48.010809       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:37:48.010933       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:37:48.010982       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:37:48.022766       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:37:48.032737       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:37:48.707148       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:37:49.511683       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:37:49.525178       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:37:49.570433       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:37:49.608229       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:37:49.619448       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:37:51.463852       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:37:51.564564       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:38:05.109470       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.245.89"}
	I1101 09:38:10.076538       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.142.186"}
	I1101 09:38:10.142730       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:38:11.709698       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.255.115"}
	I1101 09:38:25.760448       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.246.30"}
	E1101 09:38:32.305117       1 conn.go:339] Error on socket receive: read tcp 192.168.39.117:8441->192.168.39.1:54368: use of closed network connection
	E1101 09:38:33.159647       1 conn.go:339] Error on socket receive: read tcp 192.168.39.117:8441->192.168.39.1:54410: use of closed network connection
	E1101 09:38:34.683761       1 conn.go:339] Error on socket receive: read tcp 192.168.39.117:8441->192.168.39.1:54432: use of closed network connection
	E1101 09:38:36.911611       1 conn.go:339] Error on socket receive: read tcp 192.168.39.117:8441->192.168.39.1:56280: use of closed network connection
	I1101 09:38:37.040664       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:38:37.507824       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.8.193"}
	I1101 09:38:37.542457       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.70.18"}
	
	
	==> kube-controller-manager [1c1fb4cabebf9ec43bcb7770a42556698188428e401404fe89cfc30d76c9c997] <==
	I1101 09:37:51.356950       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:37:51.361451       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:37:51.362579       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:37:51.362631       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:37:51.362640       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:37:51.363011       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:37:51.363325       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:37:51.368157       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:37:51.368186       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:37:51.369295       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:37:51.369466       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:37:51.369533       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:37:51.375009       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:37:51.377336       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:37:51.381819       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:37:51.381834       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:37:51.381942       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:37:51.381820       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	E1101 09:38:37.202456       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:38:37.232213       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:38:37.251787       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:38:37.296251       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:38:37.296558       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:38:37.310437       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:38:37.322279       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [1e8cf30e4c602f25bbf6cf96d1745443f61cfe2898b4ed615bc0a56c3ef62c37] <==
	
	
	==> kube-proxy [5096d8700be537c186d3c61a1582faeb1dbb1caf964ce87f2af07a834289212a] <==
	I1101 09:36:51.575031       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:36:51.575064       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.117"]
	E1101 09:36:51.575168       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:36:51.616407       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:36:51.616525       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:36:51.616565       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:36:51.627915       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:36:51.628422       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:36:51.628479       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:36:51.633894       1 config.go:200] "Starting service config controller"
	I1101 09:36:51.633907       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:36:51.633926       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:36:51.633929       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:36:51.633939       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:36:51.633942       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:36:51.636877       1 config.go:309] "Starting node config controller"
	I1101 09:36:51.636910       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:36:51.636917       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:36:51.734076       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:36:51.734123       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:36:51.734166       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	E1101 09:36:54.527242       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": unexpected EOF"
	
	
	==> kube-proxy [ab01021df6885bc1674c819645c932515e1751e19b37225217473300f4a5db31] <==
	E1101 09:37:43.920293       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:37:44.315499       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:37:44.315582       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:37:44.315606       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:37:44.455700       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:37:44.463924       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:37:44.464457       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:37:44.491246       1 config.go:200] "Starting service config controller"
	I1101 09:37:44.491281       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:37:44.491299       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:37:44.491303       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:37:44.491310       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:37:44.491313       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:37:44.503573       1 config.go:309] "Starting node config controller"
	I1101 09:37:44.503788       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:37:44.503797       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:37:44.591880       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:37:44.591937       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:37:44.597445       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:37:44.920186       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
	I1101 09:37:44.920248       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
	I1101 09:37:44.920269       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
	
	
	==> kube-scheduler [aedd537af68630246456458b7b9da4df4d82fe978c732f7a44510ef25560fd73] <==
	I1101 09:37:00.300983       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:37:01.130751       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:37:01.130850       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:37:01.144846       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:37:01.145149       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:37:01.145191       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:37:01.146481       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:37:01.146512       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:37:01.146528       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:37:01.146534       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:37:01.147035       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:37:01.246448       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:37:01.246732       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:37:01.246754       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:37:26.744973       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:37:26.748720       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:37:26.752801       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:37:26.761077       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:37:26.761086       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 09:37:26.761132       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c7b2979a8ce4e0f48df1d96b84156f49753ae23ad8ed23c522daa9a5d0c0996c] <==
	I1101 09:37:44.922126       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
	I1101 09:37:44.922148       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
	I1101 09:37:44.922167       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
	I1101 09:37:44.922187       1 reflector.go:568] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received"
	E1101 09:37:47.883209       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:37:47.883588       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:37:47.884042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:37:47.884592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:37:47.885572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:37:47.885613       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:37:47.885646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:37:47.885691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:37:47.888562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:37:47.888657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:37:47.888736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:37:47.888816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:37:47.888890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:37:47.888977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:37:47.889033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:37:47.889091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:37:47.889203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:37:47.894598       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:37:47.895061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:37:47.895303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:37:47.895457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	
	
	==> kubelet <==
	Nov 01 09:43:42 functional-165244 kubelet[6477]: E1101 09:43:42.157073    6477 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(852034b9-551b-4941-b228-14e19166927a): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Nov 01 09:43:42 functional-165244 kubelet[6477]: E1101 09:43:42.157105    6477 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="852034b9-551b-4941-b228-14e19166927a"
	Nov 01 09:43:43 functional-165244 kubelet[6477]: E1101 09:43:43.062340    6477 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod242c1cca-34c7-42a6-8e48-95924037136a/crio-eae2acee2758eb8eb56b1c8ed5e3920470b4b3d89abd77671f97360ec79b38b1: Error finding container eae2acee2758eb8eb56b1c8ed5e3920470b4b3d89abd77671f97360ec79b38b1: Status 404 returned error can't find the container with id eae2acee2758eb8eb56b1c8ed5e3920470b4b3d89abd77671f97360ec79b38b1
	Nov 01 09:43:43 functional-165244 kubelet[6477]: E1101 09:43:43.062699    6477 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod91943c87-699c-45e6-9f2e-d020ddcce2fa/crio-7ee3b0706db47738e59c0f1d565f0dc3e4a9e6856e600edfff8e9fc471d324c1: Error finding container 7ee3b0706db47738e59c0f1d565f0dc3e4a9e6856e600edfff8e9fc471d324c1: Status 404 returned error can't find the container with id 7ee3b0706db47738e59c0f1d565f0dc3e4a9e6856e600edfff8e9fc471d324c1
	Nov 01 09:43:43 functional-165244 kubelet[6477]: E1101 09:43:43.063518    6477 manager.go:1116] Failed to create existing container: /kubepods/burstable/pode1e05e1731db5888b1911a6a04c81f44/crio-6390be12ccfcf51d25694fb6365038f4771bbc19614d887bba491f46f7e4df38: Error finding container 6390be12ccfcf51d25694fb6365038f4771bbc19614d887bba491f46f7e4df38: Status 404 returned error can't find the container with id 6390be12ccfcf51d25694fb6365038f4771bbc19614d887bba491f46f7e4df38
	Nov 01 09:43:43 functional-165244 kubelet[6477]: E1101 09:43:43.063767    6477 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod575d1b49aa2a291a9a13af46c351b088/crio-a59e79ad31bea9c46d5a5a5687e91c6bf58fc861c5f80b1844102d406019eb3e: Error finding container a59e79ad31bea9c46d5a5a5687e91c6bf58fc861c5f80b1844102d406019eb3e: Status 404 returned error can't find the container with id a59e79ad31bea9c46d5a5a5687e91c6bf58fc861c5f80b1844102d406019eb3e
	Nov 01 09:43:43 functional-165244 kubelet[6477]: E1101 09:43:43.225917    6477 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990223225542334  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	Nov 01 09:43:43 functional-165244 kubelet[6477]: E1101 09:43:43.225939    6477 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990223225542334  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	Nov 01 09:43:52 functional-165244 kubelet[6477]: E1101 09:43:52.814959    6477 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-8bdjz" podUID="0decfee6-3688-46dc-9c50-58d2e2014a33"
	Nov 01 09:43:53 functional-165244 kubelet[6477]: E1101 09:43:53.228215    6477 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990233227864693  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	Nov 01 09:43:53 functional-165244 kubelet[6477]: E1101 09:43:53.228237    6477 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990233227864693  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	Nov 01 09:43:54 functional-165244 kubelet[6477]: E1101 09:43:54.813708    6477 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="852034b9-551b-4941-b228-14e19166927a"
	Nov 01 09:44:03 functional-165244 kubelet[6477]: E1101 09:44:03.233605    6477 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990243230648960  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	Nov 01 09:44:03 functional-165244 kubelet[6477]: E1101 09:44:03.233628    6477 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990243230648960  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	Nov 01 09:44:07 functional-165244 kubelet[6477]: E1101 09:44:07.813658    6477 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="852034b9-551b-4941-b228-14e19166927a"
	Nov 01 09:44:07 functional-165244 kubelet[6477]: E1101 09:44:07.816238    6477 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-8bdjz" podUID="0decfee6-3688-46dc-9c50-58d2e2014a33"
	Nov 01 09:44:13 functional-165244 kubelet[6477]: E1101 09:44:13.236111    6477 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990253235429069  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	Nov 01 09:44:13 functional-165244 kubelet[6477]: E1101 09:44:13.236154    6477 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990253235429069  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	Nov 01 09:44:18 functional-165244 kubelet[6477]: E1101 09:44:18.817878    6477 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="852034b9-551b-4941-b228-14e19166927a"
	Nov 01 09:44:18 functional-165244 kubelet[6477]: E1101 09:44:18.820323    6477 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: reading manifest sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c in docker.io/kubernetesui/metrics-scraper: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-8bdjz" podUID="0decfee6-3688-46dc-9c50-58d2e2014a33"
	Nov 01 09:44:23 functional-165244 kubelet[6477]: E1101 09:44:23.239978    6477 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990263239207023  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	Nov 01 09:44:23 functional-165244 kubelet[6477]: E1101 09:44:23.240145    6477 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990263239207023  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	Nov 01 09:44:31 functional-165244 kubelet[6477]: E1101 09:44:31.812972    6477 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="852034b9-551b-4941-b228-14e19166927a"
	Nov 01 09:44:33 functional-165244 kubelet[6477]: E1101 09:44:33.241858    6477 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990273241529515  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	Nov 01 09:44:33 functional-165244 kubelet[6477]: E1101 09:44:33.241882    6477 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990273241529515  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:262153}  inodes_used:{value:120}}"
	
	
	==> kubernetes-dashboard [29971aadc416025ca040970558801192a64329ee548cb9c705ba068475d5947a] <==
	2025/11/01 09:38:44 Using namespace: kubernetes-dashboard
	2025/11/01 09:38:44 Using in-cluster config to connect to apiserver
	2025/11/01 09:38:44 Using secret token for csrf signing
	2025/11/01 09:38:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:38:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:38:44 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:38:44 Generating JWE encryption key
	2025/11/01 09:38:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:38:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:38:44 Initializing JWE encryption key from synchronized object
	2025/11/01 09:38:44 Creating in-cluster Sidecar client
	2025/11/01 09:38:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:38:44 Serving insecurely on HTTP port: 9090
	2025/11/01 09:39:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:39:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:40:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:40:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:41:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:41:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:42:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:42:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:44:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:38:44 Starting overwatch
	
	
	==> storage-provisioner [ac1459a19065fdd5fefbd25f4b2a9bef03e8d406984ace7e06799fabc818595b] <==
	I1101 09:37:45.386468       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:37:45.392527       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [e56e23876041ff6e56285b56c01a9689fcc7b8d076daa53c72de249e29c36dc2] <==
	W1101 09:44:15.662465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:17.666892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:17.671834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.675716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.680926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.684996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.690713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.694243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.705276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:25.709123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:25.714879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:27.719608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:27.730610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:29.734871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:29.740567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:31.744676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:31.750240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:33.754207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:33.763533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:35.767994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:35.773504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:37.777814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:37.789689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:39.795997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:39.802550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-165244 -n functional-165244
helpers_test.go:269: (dbg) Run:  kubectl --context functional-165244 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-8bdjz
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-165244 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-8bdjz
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-165244 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-8bdjz: exit status 1 (107.998207ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-165244/192.168.39.117
	Start Time:       Sat, 01 Nov 2025 09:38:34 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://47a502f1758d24f7f110c1d4180b28b14babbd1ad62fb0fc86c826d8fe58185a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 09:38:36 +0000
	      Finished:     Sat, 01 Nov 2025 09:38:36 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jzhf9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jzhf9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  6m6s  default-scheduler  Successfully assigned default/busybox-mount to functional-165244
	  Normal  Pulling    6m6s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     6m5s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.283s (1.283s including waiting). Image size: 4631262 bytes.
	  Normal  Created    6m5s  kubelet            Created container: mount-munger
	  Normal  Started    6m5s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-165244/192.168.39.117
	Start Time:       Sat, 01 Nov 2025 09:38:38 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.13
	IPs:
	  IP:  10.244.0.13
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j9zvr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-j9zvr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-165244
	  Normal   Pulling    101s (x4 over 6m3s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     59s (x4 over 4m55s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     59s (x4 over 4m55s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x8 over 4m55s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     10s (x8 over 4m55s)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-8bdjz" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-165244 describe pod busybox-mount sp-pod dashboard-metrics-scraper-77bf4d6c4c-8bdjz: exit status 1
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (389.96s)
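
The describe output and the kubelet log above point at the same root cause: every pull of docker.io/nginx and docker.io/kubernetesui/metrics-scraper is rejected with "toomanyrequests", i.e. the CI host has exhausted Docker Hub's anonymous pull allowance, so sp-pod never gets its image and the PVC test times out. When triaging a report like this, one way to confirm it is the quota (and not a registry outage) is to query Docker Hub's rate-limit headers from the same host. The Go sketch below does that; it is a diagnostic aid written for this report, not part of the test suite, and it assumes Docker's documented check flow (anonymous token from auth.docker.io, then a HEAD request against the ratelimitpreview/test manifest) is still current.

// ratelimit_probe.go - standalone diagnostic sketch; not part of helpers_test.go.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Step 1: anonymous bearer token scoped to the ratelimitpreview/test repository.
	tokenURL := "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull"
	resp, err := http.Get(tokenURL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// Step 2: HEAD the manifest; the registry reports the anonymous quota in response
	// headers (per Docker's docs, a HEAD here should not itself consume a pull).
	req, err := http.NewRequest(http.MethodHead,
		"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	fmt.Println("ratelimit-limit:    ", res.Header.Get("ratelimit-limit"))
	fmt.Println("ratelimit-remaining:", res.Header.Get("ratelimit-remaining"))
	fmt.Println("ratelimit-source:   ", res.Header.Get("docker-ratelimit-source"))
}

If ratelimit-remaining comes back as 0 from the Jenkins agent, the failure is environmental (shared egress IP hitting the anonymous limit) rather than a regression in the storage-provisioner path.
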

                                                
                                    
x
+
TestPreload (128.58s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-753124 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1101 10:24:13.934322  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:24:30.856098  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-753124 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m10.324452072s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-753124 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-753124 image pull gcr.io/k8s-minikube/busybox: (1.405641313s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-753124
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-753124: (7.140110513s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-753124 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-753124 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (46.708797465s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-753124 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
panic.go:636: *** TestPreload FAILED at 2025-11-01 10:26:16.79564736 +0000 UTC m=+3580.657525271
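
The assertion that trips here (preload_test.go:75) is a substring check over the `image list` output after the stop/start cycle: gcr.io/k8s-minikube/busybox was pulled into the runtime before the stop, but it is missing from the list above, so only the preloaded v1.32.0 images survived the restart. Below is a minimal sketch of that style of check, assuming a shell-out to the built binary; it is an illustration for this report, not the actual preload_test.go code, and the file and variable names are invented for the example.

// imagelist_check.go - illustrative only; the real assertion lives in preload_test.go.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	profile := "test-preload-753124"
	want := "gcr.io/k8s-minikube/busybox"

	// Equivalent of: out/minikube-linux-amd64 -p <profile> image list
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "image", "list").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "image list failed: %v\n%s", err, out)
		os.Exit(1)
	}
	if !strings.Contains(string(out), want) {
		fmt.Fprintf(os.Stderr, "expected %s in image list, got:\n%s", want, out)
		os.Exit(1)
	}
	fmt.Println("image retained across restart:", want)
}
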
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-753124 -n test-preload-753124
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-753124 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-753124 logs -n 25: (1.143829891s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-629778 ssh -n multinode-629778-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │ 01 Nov 25 10:12 UTC │
	│ ssh     │ multinode-629778 ssh -n multinode-629778 sudo cat /home/docker/cp-test_multinode-629778-m03_multinode-629778.txt                                          │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │ 01 Nov 25 10:12 UTC │
	│ cp      │ multinode-629778 cp multinode-629778-m03:/home/docker/cp-test.txt multinode-629778-m02:/home/docker/cp-test_multinode-629778-m03_multinode-629778-m02.txt │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │ 01 Nov 25 10:12 UTC │
	│ ssh     │ multinode-629778 ssh -n multinode-629778-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │ 01 Nov 25 10:12 UTC │
	│ ssh     │ multinode-629778 ssh -n multinode-629778-m02 sudo cat /home/docker/cp-test_multinode-629778-m03_multinode-629778-m02.txt                                  │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │ 01 Nov 25 10:12 UTC │
	│ node    │ multinode-629778 node stop m03                                                                                                                            │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │ 01 Nov 25 10:12 UTC │
	│ node    │ multinode-629778 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:12 UTC │ 01 Nov 25 10:13 UTC │
	│ node    │ list -p multinode-629778                                                                                                                                  │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │                     │
	│ stop    │ -p multinode-629778                                                                                                                                       │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:13 UTC │ 01 Nov 25 10:16 UTC │
	│ start   │ -p multinode-629778 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:16 UTC │ 01 Nov 25 10:18 UTC │
	│ node    │ list -p multinode-629778                                                                                                                                  │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │                     │
	│ node    │ multinode-629778 node delete m03                                                                                                                          │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:18 UTC │
	│ stop    │ multinode-629778 stop                                                                                                                                     │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:18 UTC │ 01 Nov 25 10:21 UTC │
	│ start   │ -p multinode-629778 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:21 UTC │ 01 Nov 25 10:23 UTC │
	│ node    │ list -p multinode-629778                                                                                                                                  │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:23 UTC │                     │
	│ start   │ -p multinode-629778-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-629778-m02 │ jenkins │ v1.37.0 │ 01 Nov 25 10:23 UTC │                     │
	│ start   │ -p multinode-629778-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-629778-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 10:23 UTC │ 01 Nov 25 10:24 UTC │
	│ node    │ add -p multinode-629778                                                                                                                                   │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:24 UTC │                     │
	│ delete  │ -p multinode-629778-m03                                                                                                                                   │ multinode-629778-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 10:24 UTC │ 01 Nov 25 10:24 UTC │
	│ delete  │ -p multinode-629778                                                                                                                                       │ multinode-629778     │ jenkins │ v1.37.0 │ 01 Nov 25 10:24 UTC │ 01 Nov 25 10:24 UTC │
	│ start   │ -p test-preload-753124 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-753124  │ jenkins │ v1.37.0 │ 01 Nov 25 10:24 UTC │ 01 Nov 25 10:25 UTC │
	│ image   │ test-preload-753124 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-753124  │ jenkins │ v1.37.0 │ 01 Nov 25 10:25 UTC │ 01 Nov 25 10:25 UTC │
	│ stop    │ -p test-preload-753124                                                                                                                                    │ test-preload-753124  │ jenkins │ v1.37.0 │ 01 Nov 25 10:25 UTC │ 01 Nov 25 10:25 UTC │
	│ start   │ -p test-preload-753124 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-753124  │ jenkins │ v1.37.0 │ 01 Nov 25 10:25 UTC │ 01 Nov 25 10:26 UTC │
	│ image   │ test-preload-753124 image list                                                                                                                            │ test-preload-753124  │ jenkins │ v1.37.0 │ 01 Nov 25 10:26 UTC │ 01 Nov 25 10:26 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:25:29
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:25:29.937996  373038 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:25:29.938276  373038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:25:29.938287  373038 out.go:374] Setting ErrFile to fd 2...
	I1101 10:25:29.938293  373038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:25:29.938523  373038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 10:25:29.939025  373038 out.go:368] Setting JSON to false
	I1101 10:25:29.939985  373038 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7678,"bootTime":1761985052,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:25:29.940086  373038 start.go:143] virtualization: kvm guest
	I1101 10:25:29.942293  373038 out.go:179] * [test-preload-753124] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:25:29.944071  373038 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:25:29.944067  373038 notify.go:221] Checking for updates...
	I1101 10:25:29.946734  373038 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:25:29.948277  373038 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 10:25:29.949675  373038 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 10:25:29.951175  373038 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:25:29.952873  373038 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:25:29.955067  373038 config.go:182] Loaded profile config "test-preload-753124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:25:29.956927  373038 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 10:25:29.960949  373038 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:25:29.996076  373038 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 10:25:29.997479  373038 start.go:309] selected driver: kvm2
	I1101 10:25:29.997501  373038 start.go:930] validating driver "kvm2" against &{Name:test-preload-753124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-753124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:25:29.997622  373038 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:25:29.998655  373038 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:25:29.998697  373038 cni.go:84] Creating CNI manager for ""
	I1101 10:25:29.998744  373038 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:25:29.998791  373038 start.go:353] cluster config:
	{Name:test-preload-753124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-753124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:25:29.998908  373038 iso.go:125] acquiring lock: {Name:mkc74493fbbc2007c645c4ed6349cf76e7fb2185 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:25:30.000433  373038 out.go:179] * Starting "test-preload-753124" primary control-plane node in "test-preload-753124" cluster
	I1101 10:25:30.001841  373038 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:25:30.030548  373038 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:25:30.030587  373038 cache.go:59] Caching tarball of preloaded images
	I1101 10:25:30.030784  373038 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:25:30.032670  373038 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1101 10:25:30.033834  373038 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 10:25:30.067470  373038 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1101 10:25:30.067521  373038 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:25:32.390766  373038 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1101 10:25:32.390942  373038 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/config.json ...
	I1101 10:25:32.391197  373038 start.go:360] acquireMachinesLock for test-preload-753124: {Name:mkd221a68334bc82c567a9a06c8563af1e1c38bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 10:25:32.391271  373038 start.go:364] duration metric: took 48.64µs to acquireMachinesLock for "test-preload-753124"
	I1101 10:25:32.391296  373038 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:25:32.391307  373038 fix.go:54] fixHost starting: 
	I1101 10:25:32.393429  373038 fix.go:112] recreateIfNeeded on test-preload-753124: state=Stopped err=<nil>
	W1101 10:25:32.393469  373038 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:25:32.395448  373038 out.go:252] * Restarting existing kvm2 VM for "test-preload-753124" ...
	I1101 10:25:32.395503  373038 main.go:143] libmachine: starting domain...
	I1101 10:25:32.395515  373038 main.go:143] libmachine: ensuring networks are active...
	I1101 10:25:32.396389  373038 main.go:143] libmachine: Ensuring network default is active
	I1101 10:25:32.396963  373038 main.go:143] libmachine: Ensuring network mk-test-preload-753124 is active
	I1101 10:25:32.397478  373038 main.go:143] libmachine: getting domain XML...
	I1101 10:25:32.398932  373038 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-753124</name>
	  <uuid>6cd17a31-56b1-4f0c-9e40-02494182f555</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/test-preload-753124/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/test-preload-753124/test-preload-753124.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:41:14:81'/>
	      <source network='mk-test-preload-753124'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:59:3f:ed'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
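The XML dump above is the already-defined KVM domain that libmachine restarts after first making sure the `default` and `mk-test-preload-753124` networks are active. A minimal sketch of that sequence with the libvirt Go bindings (libvirt.org/go/libvirt); this is illustrative only and not minikube's kvm2 driver code:

package main

import (
	"fmt"
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Ensure the private network backing the VM is active before booting it
	// (the log does this for both "default" and the mk-* network).
	network, err := conn.LookupNetworkByName("mk-test-preload-753124")
	if err != nil {
		log.Fatal(err)
	}
	if active, _ := network.IsActive(); !active {
		if err := network.Create(); err != nil { // equivalent of `virsh net-start`
			log.Fatal(err)
		}
	}

	// Start the already-defined domain (equivalent of `virsh start`).
	dom, err := conn.LookupDomainByName("test-preload-753124")
	if err != nil {
		log.Fatal(err)
	}
	xmlDesc, _ := dom.GetXMLDesc(0) // the same XML that is dumped in the log
	fmt.Printf("starting domain (%d bytes of XML)\n", len(xmlDesc))
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
}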
	
	I1101 10:25:33.790189  373038 main.go:143] libmachine: waiting for domain to start...
	I1101 10:25:33.791699  373038 main.go:143] libmachine: domain is now running
	I1101 10:25:33.791719  373038 main.go:143] libmachine: waiting for IP...
	I1101 10:25:33.792559  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:33.793236  373038 main.go:143] libmachine: domain test-preload-753124 has current primary IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:33.793249  373038 main.go:143] libmachine: found domain IP: 192.168.39.18
	I1101 10:25:33.793255  373038 main.go:143] libmachine: reserving static IP address...
	I1101 10:25:33.793706  373038 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-753124", mac: "52:54:00:41:14:81", ip: "192.168.39.18"} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:24:27 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:33.793735  373038 main.go:143] libmachine: skip adding static IP to network mk-test-preload-753124 - found existing host DHCP lease matching {name: "test-preload-753124", mac: "52:54:00:41:14:81", ip: "192.168.39.18"}
	I1101 10:25:33.793744  373038 main.go:143] libmachine: reserved static IP address 192.168.39.18 for domain test-preload-753124
	I1101 10:25:33.793758  373038 main.go:143] libmachine: waiting for SSH...
	I1101 10:25:33.793767  373038 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 10:25:33.796381  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:33.796755  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:24:27 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:33.796778  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:33.797041  373038 main.go:143] libmachine: Using SSH client type: native
	I1101 10:25:33.798271  373038 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I1101 10:25:33.798341  373038 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 10:25:36.869247  373038 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I1101 10:25:42.949278  373038 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: no route to host
	I1101 10:25:45.950024  373038 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.18:22: connect: connection refused
	I1101 10:25:49.068936  373038 main.go:143] libmachine: SSH cmd err, output: <nil>: 
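The "waiting for SSH" phase above keeps retrying until port 22 on 192.168.39.18 accepts a connection, tolerating the intermediate "no route to host" and "connection refused" errors while the guest boots. A rough sketch of such a wait loop in Go (the retry cadence and overall deadline are illustrative values, not the ones minikube uses):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the guest's SSH port until it accepts a TCP connection
// or the overall deadline expires.
func waitForSSH(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil // port 22 reachable; SSH commands can run now
		}
		// "no route to host" / "connection refused" are expected while the VM boots.
		fmt.Println("retrying:", err)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	if err := waitForSSH("192.168.39.18:22", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}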
	I1101 10:25:49.073095  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.073617  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:49.073666  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.073956  373038 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/config.json ...
	I1101 10:25:49.074201  373038 machine.go:94] provisionDockerMachine start ...
	I1101 10:25:49.076931  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.077443  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:49.077474  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.077709  373038 main.go:143] libmachine: Using SSH client type: native
	I1101 10:25:49.078103  373038 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I1101 10:25:49.078126  373038 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:25:49.194534  373038 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 10:25:49.194565  373038 buildroot.go:166] provisioning hostname "test-preload-753124"
	I1101 10:25:49.197601  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.198088  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:49.198115  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.198311  373038 main.go:143] libmachine: Using SSH client type: native
	I1101 10:25:49.198549  373038 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I1101 10:25:49.198565  373038 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-753124 && echo "test-preload-753124" | sudo tee /etc/hostname
	I1101 10:25:49.334356  373038 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-753124
	
	I1101 10:25:49.337231  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.337588  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:49.337608  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.337814  373038 main.go:143] libmachine: Using SSH client type: native
	I1101 10:25:49.338026  373038 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I1101 10:25:49.338041  373038 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-753124' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-753124/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-753124' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:25:49.468129  373038 main.go:143] libmachine: SSH cmd err, output: <nil>: 
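Each provisioning step above (hostname, the /etc/hosts rewrite, and the certificate copies that follow) is a single command executed over SSH with the machine's id_rsa key as the "docker" user. A minimal stand-in using golang.org/x/crypto/ssh; the shortened key path is a placeholder for the machines/test-preload-753124/id_rsa path shown in the log, and this is not minikube's internal runner:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/test-preload-753124/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no known_hosts check
	}
	client, err := ssh.Dial("tcp", "192.168.39.18:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("out=%q err=%v\n", out, err)
}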
	I1101 10:25:49.468156  373038 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21832-344560/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-344560/.minikube}
	I1101 10:25:49.468175  373038 buildroot.go:174] setting up certificates
	I1101 10:25:49.468185  373038 provision.go:84] configureAuth start
	I1101 10:25:49.471357  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.471758  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:49.471778  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.474434  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.474881  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:49.474906  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.475032  373038 provision.go:143] copyHostCerts
	I1101 10:25:49.475087  373038 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-344560/.minikube/ca.pem, removing ...
	I1101 10:25:49.475097  373038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-344560/.minikube/ca.pem
	I1101 10:25:49.475167  373038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/ca.pem (1082 bytes)
	I1101 10:25:49.475272  373038 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-344560/.minikube/cert.pem, removing ...
	I1101 10:25:49.475281  373038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-344560/.minikube/cert.pem
	I1101 10:25:49.475308  373038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/cert.pem (1123 bytes)
	I1101 10:25:49.475378  373038 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-344560/.minikube/key.pem, removing ...
	I1101 10:25:49.475386  373038 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-344560/.minikube/key.pem
	I1101 10:25:49.475412  373038 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/key.pem (1679 bytes)
	I1101 10:25:49.475530  373038 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem org=jenkins.test-preload-753124 san=[127.0.0.1 192.168.39.18 localhost minikube test-preload-753124]
	I1101 10:25:49.760325  373038 provision.go:177] copyRemoteCerts
	I1101 10:25:49.760390  373038 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:25:49.762890  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.763277  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:49.763301  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.763449  373038 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/test-preload-753124/id_rsa Username:docker}
	I1101 10:25:49.854827  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:25:49.887637  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 10:25:49.922714  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:25:49.956942  373038 provision.go:87] duration metric: took 488.734247ms to configureAuth
	I1101 10:25:49.956983  373038 buildroot.go:189] setting minikube options for container-runtime
	I1101 10:25:49.957206  373038 config.go:182] Loaded profile config "test-preload-753124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:25:49.960192  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.960551  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:49.960579  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:49.960746  373038 main.go:143] libmachine: Using SSH client type: native
	I1101 10:25:49.960976  373038 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I1101 10:25:49.960996  373038 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:25:50.234358  373038 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:25:50.234393  373038 machine.go:97] duration metric: took 1.160174871s to provisionDockerMachine
	I1101 10:25:50.234410  373038 start.go:293] postStartSetup for "test-preload-753124" (driver="kvm2")
	I1101 10:25:50.234424  373038 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:25:50.234484  373038 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:25:50.237315  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:50.237731  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:50.237752  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:50.237891  373038 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/test-preload-753124/id_rsa Username:docker}
	I1101 10:25:50.330470  373038 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:25:50.336284  373038 info.go:137] Remote host: Buildroot 2025.02
	I1101 10:25:50.336317  373038 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-344560/.minikube/addons for local assets ...
	I1101 10:25:50.336405  373038 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-344560/.minikube/files for local assets ...
	I1101 10:25:50.336483  373038 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem -> 3485182.pem in /etc/ssl/certs
	I1101 10:25:50.336573  373038 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:25:50.350750  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem --> /etc/ssl/certs/3485182.pem (1708 bytes)
	I1101 10:25:50.382904  373038 start.go:296] duration metric: took 148.47434ms for postStartSetup
	I1101 10:25:50.382963  373038 fix.go:56] duration metric: took 17.991656812s for fixHost
	I1101 10:25:50.385509  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:50.385856  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:50.385889  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:50.386032  373038 main.go:143] libmachine: Using SSH client type: native
	I1101 10:25:50.386223  373038 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.18 22 <nil> <nil>}
	I1101 10:25:50.386234  373038 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 10:25:50.502995  373038 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761992750.464755451
	
	I1101 10:25:50.503017  373038 fix.go:216] guest clock: 1761992750.464755451
	I1101 10:25:50.503025  373038 fix.go:229] Guest: 2025-11-01 10:25:50.464755451 +0000 UTC Remote: 2025-11-01 10:25:50.382967328 +0000 UTC m=+20.496557237 (delta=81.788123ms)
	I1101 10:25:50.503043  373038 fix.go:200] guest clock delta is within tolerance: 81.788123ms
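The guest clock check above runs `date +%s.%N` inside the VM and compares it with the host timestamp captured just before the command; here the 81.788123ms delta is within tolerance, so no clock adjustment is needed. A small sketch of that comparison (the tolerance constant is an assumption for illustration, not minikube's actual threshold):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as seen in the log above.
	guestOut := "1761992750.464755451"

	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))

	host := time.Now() // in the real flow, recorded right before running the command
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // illustrative threshold only
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}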
	I1101 10:25:50.503051  373038 start.go:83] releasing machines lock for "test-preload-753124", held for 18.111765801s
	I1101 10:25:50.505816  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:50.506269  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:50.506302  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:50.506914  373038 ssh_runner.go:195] Run: cat /version.json
	I1101 10:25:50.506925  373038 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:25:50.509960  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:50.510269  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:50.510349  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:50.510378  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:50.510521  373038 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/test-preload-753124/id_rsa Username:docker}
	I1101 10:25:50.510838  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:50.510891  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:50.511112  373038 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/test-preload-753124/id_rsa Username:docker}
	I1101 10:25:50.598067  373038 ssh_runner.go:195] Run: systemctl --version
	I1101 10:25:50.624104  373038 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:25:50.771344  373038 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:25:50.779007  373038 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:25:50.779098  373038 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:25:50.801637  373038 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:25:50.801674  373038 start.go:496] detecting cgroup driver to use...
	I1101 10:25:50.801743  373038 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:25:50.822666  373038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:25:50.841391  373038 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:25:50.841479  373038 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:25:50.861450  373038 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:25:50.880311  373038 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:25:51.032467  373038 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:25:51.249986  373038 docker.go:234] disabling docker service ...
	I1101 10:25:51.250075  373038 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:25:51.267681  373038 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:25:51.284027  373038 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:25:51.447808  373038 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:25:51.596557  373038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:25:51.613240  373038 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:25:51.638390  373038 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1101 10:25:51.638477  373038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:25:51.652137  373038 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:25:51.652224  373038 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:25:51.666162  373038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:25:51.679984  373038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:25:51.693547  373038 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:25:51.708124  373038 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:25:51.722163  373038 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:25:51.745959  373038 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
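The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to "cgroupfs", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. The same line-oriented edits expressed in Go for readability, working on a trimmed stand-in for the config file (this mirrors the sed patterns; it is not the code minikube runs):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Trimmed stand-in for /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	// (the conmon_cgroup and default_sysctls edits in the log follow the same pattern)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}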
	I1101 10:25:51.759570  373038 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:25:51.771303  373038 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 10:25:51.771375  373038 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 10:25:51.793572  373038 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:25:51.806076  373038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:25:51.955537  373038 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:25:52.071818  373038 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:25:52.071929  373038 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:25:52.078262  373038 start.go:564] Will wait 60s for crictl version
	I1101 10:25:52.078333  373038 ssh_runner.go:195] Run: which crictl
	I1101 10:25:52.083462  373038 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:25:52.130231  373038 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 10:25:52.130338  373038 ssh_runner.go:195] Run: crio --version
	I1101 10:25:52.168232  373038 ssh_runner.go:195] Run: crio --version
	I1101 10:25:52.206012  373038 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1101 10:25:52.210219  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:52.210640  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:25:52.210668  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:25:52.210928  373038 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 10:25:52.216270  373038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:25:52.233169  373038 kubeadm.go:884] updating cluster {Name:test-preload-753124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.32.0 ClusterName:test-preload-753124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:25:52.233316  373038 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 10:25:52.233381  373038 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:25:52.283848  373038 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1101 10:25:52.283950  373038 ssh_runner.go:195] Run: which lz4
	I1101 10:25:52.289396  373038 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 10:25:52.295442  373038 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 10:25:52.295486  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1101 10:25:53.976052  373038 crio.go:462] duration metric: took 1.686703611s to copy over tarball
	I1101 10:25:53.976154  373038 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 10:25:55.674098  373038 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.697913912s)
	I1101 10:25:55.674130  373038 crio.go:469] duration metric: took 1.69804031s to extract the tarball
	I1101 10:25:55.674140  373038 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 10:25:55.717882  373038 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:25:55.764297  373038 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:25:55.764323  373038 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:25:55.764331  373038 kubeadm.go:935] updating node { 192.168.39.18 8443 v1.32.0 crio true true} ...
	I1101 10:25:55.764440  373038 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-753124 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-753124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:25:55.764511  373038 ssh_runner.go:195] Run: crio config
	I1101 10:25:55.819646  373038 cni.go:84] Creating CNI manager for ""
	I1101 10:25:55.819675  373038 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:25:55.819704  373038 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:25:55.819742  373038 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.18 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-753124 NodeName:test-preload-753124 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:25:55.819885  373038 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-753124"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.18"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.18"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
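Note that the generated kubeadm config above sets the kubelet's cgroupDriver to cgroupfs, matching the cgroup_manager just written into the CRI-O drop-in. A quick, purely illustrative way to sanity-check that field from the KubeletConfiguration document using sigs.k8s.io/yaml (not part of the minikube flow):

package main

import (
	"fmt"
	"log"

	"sigs.k8s.io/yaml"
)

func main() {
	kubeletCfg := []byte(`apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
`)

	var cfg struct {
		Kind                     string `json:"kind"`
		CgroupDriver             string `json:"cgroupDriver"`
		ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
	}
	if err := yaml.Unmarshal(kubeletCfg, &cfg); err != nil {
		log.Fatal(err)
	}
	// CRI-O was configured with cgroup_manager = "cgroupfs"; the kubelet must agree.
	fmt.Printf("%s: cgroupDriver=%s endpoint=%s\n", cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint)
}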
	
	I1101 10:25:55.819972  373038 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1101 10:25:55.833263  373038 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:25:55.833350  373038 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:25:55.846406  373038 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I1101 10:25:55.869133  373038 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:25:55.891420  373038 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1101 10:25:55.915125  373038 ssh_runner.go:195] Run: grep 192.168.39.18	control-plane.minikube.internal$ /etc/hosts
	I1101 10:25:55.919959  373038 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:25:55.935958  373038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:25:56.083585  373038 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:25:56.104807  373038 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124 for IP: 192.168.39.18
	I1101 10:25:56.104835  373038 certs.go:195] generating shared ca certs ...
	I1101 10:25:56.104854  373038 certs.go:227] acquiring lock for ca certs: {Name:mkba0fe79f6b0ed99353299aaf34c6fbc547c6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:25:56.105063  373038 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key
	I1101 10:25:56.105130  373038 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key
	I1101 10:25:56.105143  373038 certs.go:257] generating profile certs ...
	I1101 10:25:56.105228  373038 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/client.key
	I1101 10:25:56.105284  373038 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/apiserver.key.81d674d3
	I1101 10:25:56.105329  373038 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/proxy-client.key
	I1101 10:25:56.105436  373038 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/348518.pem (1338 bytes)
	W1101 10:25:56.105478  373038 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-344560/.minikube/certs/348518_empty.pem, impossibly tiny 0 bytes
	I1101 10:25:56.105487  373038 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:25:56.105511  373038 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:25:56.105531  373038 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:25:56.105552  373038 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem (1679 bytes)
	I1101 10:25:56.105588  373038 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem (1708 bytes)
	I1101 10:25:56.106183  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:25:56.149088  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:25:56.192551  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:25:56.225966  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:25:56.258958  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 10:25:56.291853  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:25:56.324374  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:25:56.356987  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:25:56.389126  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/certs/348518.pem --> /usr/share/ca-certificates/348518.pem (1338 bytes)
	I1101 10:25:56.421167  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem --> /usr/share/ca-certificates/3485182.pem (1708 bytes)
	I1101 10:25:56.453548  373038 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:25:56.485252  373038 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:25:56.508377  373038 ssh_runner.go:195] Run: openssl version
	I1101 10:25:56.515906  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/348518.pem && ln -fs /usr/share/ca-certificates/348518.pem /etc/ssl/certs/348518.pem"
	I1101 10:25:56.530467  373038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/348518.pem
	I1101 10:25:56.536816  373038 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:34 /usr/share/ca-certificates/348518.pem
	I1101 10:25:56.536888  373038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/348518.pem
	I1101 10:25:56.545422  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/348518.pem /etc/ssl/certs/51391683.0"
	I1101 10:25:56.561328  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3485182.pem && ln -fs /usr/share/ca-certificates/3485182.pem /etc/ssl/certs/3485182.pem"
	I1101 10:25:56.577002  373038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3485182.pem
	I1101 10:25:56.583638  373038 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:34 /usr/share/ca-certificates/3485182.pem
	I1101 10:25:56.583705  373038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3485182.pem
	I1101 10:25:56.592452  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3485182.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:25:56.607043  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:25:56.621834  373038 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:25:56.628170  373038 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:25:56.628240  373038 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:25:56.636743  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:25:56.652260  373038 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:25:56.658696  373038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:25:56.667183  373038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:25:56.675376  373038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:25:56.683938  373038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:25:56.692199  373038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:25:56.700672  373038 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
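The series of `openssl x509 -checkend 86400` calls above verifies that each existing control-plane certificate remains valid for at least another 24 hours before it is reused. The equivalent check in Go with crypto/x509; the path is one of the certificates from the log and serves only as an example input:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}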
	I1101 10:25:56.709056  373038 kubeadm.go:401] StartCluster: {Name:test-preload-753124 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-753124 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:25:56.709143  373038 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:25:56.709235  373038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:25:56.753010  373038 cri.go:89] found id: ""
	I1101 10:25:56.753105  373038 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:25:56.766476  373038 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 10:25:56.766511  373038 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 10:25:56.766570  373038 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 10:25:56.779737  373038 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:25:56.780205  373038 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-753124" does not appear in /home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 10:25:56.780302  373038 kubeconfig.go:62] /home/jenkins/minikube-integration/21832-344560/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-753124" cluster setting kubeconfig missing "test-preload-753124" context setting]
	I1101 10:25:56.780667  373038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/kubeconfig: {Name:mkaf75364e29c8ee4b260af678d355333969cf4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:25:56.781236  373038 kapi.go:59] client config for test-preload-753124: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/client.key", CAFile:"/home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:25:56.781732  373038 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 10:25:56.781749  373038 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 10:25:56.781753  373038 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 10:25:56.781757  373038 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 10:25:56.781762  373038 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 10:25:56.782137  373038 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 10:25:56.798075  373038 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.18
	I1101 10:25:56.798121  373038 kubeadm.go:1161] stopping kube-system containers ...
	I1101 10:25:56.798140  373038 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 10:25:56.798207  373038 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:25:56.849150  373038 cri.go:89] found id: ""
	I1101 10:25:56.849234  373038 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 10:25:56.876829  373038 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:25:56.890314  373038 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:25:56.890344  373038 kubeadm.go:158] found existing configuration files:
	
	I1101 10:25:56.890405  373038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:25:56.902762  373038 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:25:56.902839  373038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:25:56.917186  373038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:25:56.929740  373038 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:25:56.929807  373038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:25:56.942945  373038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:25:56.955418  373038 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:25:56.955481  373038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:25:56.968708  373038 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:25:56.981029  373038 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:25:56.981096  373038 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
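
The grep/rm sequence above shows the stale-config check: each kubeconfig under /etc/kubernetes is tested for the expected control-plane endpoint and deleted if it is missing or points elsewhere (here every grep exits with status 2 because the files are simply absent). A minimal Go sketch of that check-and-remove pattern, run locally rather than over SSH and not minikube's actual implementation, could look like:

package main

import (
	"fmt"
	"os"
	"strings"
)

// endpoint is the control-plane URL the run greps for; adjust for your cluster.
const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: remove it so a later
			// `kubeadm init phase kubeconfig` can regenerate it
			// (mirrors the `sudo rm -f` calls in the log).
			_ = os.Remove(f)
			fmt.Printf("removed (or absent): %s\n", f)
			continue
		}
		fmt.Printf("kept: %s\n", f)
	}
}
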
	I1101 10:25:56.994638  373038 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:25:57.007941  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:25:57.072457  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:25:57.896984  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:25:58.173842  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:25:58.257657  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
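
The five `kubeadm init phase` invocations above (certs, kubeconfig, kubelet-start, control-plane, etcd) rebuild the control plane piecewise instead of running a full `kubeadm init`. A self-contained Go sketch that shells out to the same phases in the same order, assuming kubeadm is on PATH and using the config path shown in the log, might be:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml" // config path taken from the log above
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", cfg},
		{"init", "phase", "kubeconfig", "all", "--config", cfg},
		{"init", "phase", "kubelet-start", "--config", cfg},
		{"init", "phase", "control-plane", "all", "--config", cfg},
		{"init", "phase", "etcd", "local", "--config", cfg},
	}
	for _, args := range phases {
		fmt.Println("kubeadm", args)
		out, err := exec.Command("kubeadm", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("phase %v failed: %v\n%s", args, err, out)
		}
	}
}
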
	I1101 10:25:58.362936  373038 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:25:58.363044  373038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:25:58.864100  373038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:25:59.363927  373038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:25:59.863587  373038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:25:59.893707  373038 api_server.go:72] duration metric: took 1.530787944s to wait for apiserver process to appear ...
	I1101 10:25:59.893742  373038 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:25:59.893786  373038 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I1101 10:25:59.894426  373038 api_server.go:269] stopped: https://192.168.39.18:8443/healthz: Get "https://192.168.39.18:8443/healthz": dial tcp 192.168.39.18:8443: connect: connection refused
	I1101 10:26:00.394153  373038 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I1101 10:26:02.304608  373038 api_server.go:279] https://192.168.39.18:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:26:02.304643  373038 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:26:02.304660  373038 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I1101 10:26:02.352130  373038 api_server.go:279] https://192.168.39.18:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 10:26:02.352164  373038 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 10:26:02.394510  373038 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I1101 10:26:02.460275  373038 api_server.go:279] https://192.168.39.18:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:26:02.460317  373038 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:26:02.893957  373038 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I1101 10:26:02.899307  373038 api_server.go:279] https://192.168.39.18:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:26:02.899348  373038 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:26:03.393993  373038 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I1101 10:26:03.404736  373038 api_server.go:279] https://192.168.39.18:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 10:26:03.404777  373038 api_server.go:103] status: https://192.168.39.18:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 10:26:03.894487  373038 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I1101 10:26:03.902442  373038 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I1101 10:26:03.912829  373038 api_server.go:141] control plane version: v1.32.0
	I1101 10:26:03.912886  373038 api_server.go:131] duration metric: took 4.019113068s to wait for apiserver health ...
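
The /healthz probe above first fails with connection refused, then returns 403 for the anonymous request, then 500 while post-start hooks (bootstrap RBAC roles, priority classes, bootstrap-controller) are still completing, and finally 200 after about four seconds. A minimal Go poller against such an endpoint, assuming the same URL and skipping TLS verification only to keep the sketch short (the real check presents the minikube client certificate and verifies the cluster CA), could be:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.18:8443/healthz" // endpoint taken from the log above
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 0; attempt < 60; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("status %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthy: the log reaches this point after ~4s
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
	}
	fmt.Println("gave up waiting for /healthz")
}
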
	I1101 10:26:03.912901  373038 cni.go:84] Creating CNI manager for ""
	I1101 10:26:03.912912  373038 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:26:03.914799  373038 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 10:26:03.916273  373038 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 10:26:03.940078  373038 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
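
The log only records that a 496-byte conflist was copied to /etc/cni/net.d/1-k8s.conflist, not its contents. The Go sketch below writes an illustrative bridge CNI configuration of the kind this step produces; the field values are representative assumptions, not the actual payload from this run.

package main

import (
	"fmt"
	"os"
)

// conflist is an illustrative bridge CNI config; the exact 496-byte payload is
// not shown in the log, so these values are representative, not the real file.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	path := "/etc/cni/net.d/1-k8s.conflist" // destination used in the log above
	if err := os.WriteFile(path, []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed (needs root):", err)
		return
	}
	fmt.Println("wrote", len(conflist), "bytes to", path)
}
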
	I1101 10:26:03.983273  373038 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:26:03.992473  373038 system_pods.go:59] 7 kube-system pods found
	I1101 10:26:03.992531  373038 system_pods.go:61] "coredns-668d6bf9bc-bl94g" [2b5d8404-3426-4ae8-b0b1-c3ddcf975621] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:26:03.992545  373038 system_pods.go:61] "etcd-test-preload-753124" [b0f88a3b-72e9-4bbd-aa1c-efd2e851c5fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:26:03.992557  373038 system_pods.go:61] "kube-apiserver-test-preload-753124" [fdd6a774-db6a-4222-8e1e-ac213cf03d6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:26:03.992570  373038 system_pods.go:61] "kube-controller-manager-test-preload-753124" [945b23f5-d3ac-4a5e-b241-4600561960cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:26:03.992582  373038 system_pods.go:61] "kube-proxy-vdxtw" [4f78c451-f205-4bb7-96cf-d5872ba7cacc] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 10:26:03.992592  373038 system_pods.go:61] "kube-scheduler-test-preload-753124" [043ae668-4983-4b73-bc81-0941c4ccf7e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:26:03.992609  373038 system_pods.go:61] "storage-provisioner" [e8c07910-5a5a-4e07-877f-ff6d558c8b02] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:26:03.992637  373038 system_pods.go:74] duration metric: took 9.319765ms to wait for pod list to return data ...
	I1101 10:26:03.992654  373038 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:26:03.999611  373038 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 10:26:03.999653  373038 node_conditions.go:123] node cpu capacity is 2
	I1101 10:26:03.999671  373038 node_conditions.go:105] duration metric: took 7.012017ms to run NodePressure ...
	I1101 10:26:03.999754  373038 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 10:26:04.287905  373038 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 10:26:04.292261  373038 kubeadm.go:744] kubelet initialised
	I1101 10:26:04.292291  373038 kubeadm.go:745] duration metric: took 4.355851ms waiting for restarted kubelet to initialise ...
	I1101 10:26:04.292315  373038 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:26:04.310491  373038 ops.go:34] apiserver oom_adj: -16
	I1101 10:26:04.310523  373038 kubeadm.go:602] duration metric: took 7.54400331s to restartPrimaryControlPlane
	I1101 10:26:04.310538  373038 kubeadm.go:403] duration metric: took 7.60149552s to StartCluster
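
The oom_adj check a few lines up confirms the restarted apiserver runs with a strongly negative OOM score adjustment (-16), i.e. the kernel will avoid killing it under memory pressure. A small Go sketch of the same lookup, using a simplified pgrep invocation rather than the exact pattern in the log, might be:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the newest kube-apiserver PID, then read its OOM score adjustment
	// from /proc, as the log's `cat /proc/$(pgrep kube-apiserver)/oom_adj` does.
	out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("cannot read oom_adj:", err)
		return
	}
	fmt.Printf("kube-apiserver pid %s oom_adj %s", pid, adj)
}
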
	I1101 10:26:04.310566  373038 settings.go:142] acquiring lock: {Name:mk0cdfdd584044c1d93f88e46e35ef3af10fed81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:26:04.310677  373038 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 10:26:04.311554  373038 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/kubeconfig: {Name:mkaf75364e29c8ee4b260af678d355333969cf4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:26:04.311934  373038 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.18 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:26:04.311973  373038 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:26:04.312079  373038 addons.go:70] Setting storage-provisioner=true in profile "test-preload-753124"
	I1101 10:26:04.312106  373038 addons.go:239] Setting addon storage-provisioner=true in "test-preload-753124"
	W1101 10:26:04.312114  373038 addons.go:248] addon storage-provisioner should already be in state true
	I1101 10:26:04.312114  373038 addons.go:70] Setting default-storageclass=true in profile "test-preload-753124"
	I1101 10:26:04.312145  373038 host.go:66] Checking if "test-preload-753124" exists ...
	I1101 10:26:04.312152  373038 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-753124"
	I1101 10:26:04.312162  373038 config.go:182] Loaded profile config "test-preload-753124": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 10:26:04.313473  373038 out.go:179] * Verifying Kubernetes components...
	I1101 10:26:04.314848  373038 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:26:04.314852  373038 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:26:04.315020  373038 kapi.go:59] client config for test-preload-753124: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/client.key", CAFile:"/home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:26:04.315377  373038 addons.go:239] Setting addon default-storageclass=true in "test-preload-753124"
	W1101 10:26:04.315394  373038 addons.go:248] addon default-storageclass should already be in state true
	I1101 10:26:04.315421  373038 host.go:66] Checking if "test-preload-753124" exists ...
	I1101 10:26:04.316024  373038 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:26:04.316045  373038 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:26:04.317302  373038 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:26:04.317319  373038 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:26:04.319165  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:26:04.319697  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:26:04.319742  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:26:04.319947  373038 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/test-preload-753124/id_rsa Username:docker}
	I1101 10:26:04.320162  373038 main.go:143] libmachine: domain test-preload-753124 has defined MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:26:04.320620  373038 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:41:14:81", ip: ""} in network mk-test-preload-753124: {Iface:virbr1 ExpiryTime:2025-11-01 11:25:45 +0000 UTC Type:0 Mac:52:54:00:41:14:81 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:test-preload-753124 Clientid:01:52:54:00:41:14:81}
	I1101 10:26:04.320649  373038 main.go:143] libmachine: domain test-preload-753124 has defined IP address 192.168.39.18 and MAC address 52:54:00:41:14:81 in network mk-test-preload-753124
	I1101 10:26:04.320846  373038 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/test-preload-753124/id_rsa Username:docker}
	I1101 10:26:04.627158  373038 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:26:04.665823  373038 node_ready.go:35] waiting up to 6m0s for node "test-preload-753124" to be "Ready" ...
	I1101 10:26:04.672371  373038 node_ready.go:49] node "test-preload-753124" is "Ready"
	I1101 10:26:04.672410  373038 node_ready.go:38] duration metric: took 6.543412ms for node "test-preload-753124" to be "Ready" ...
	I1101 10:26:04.672427  373038 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:26:04.672478  373038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:26:04.728802  373038 api_server.go:72] duration metric: took 416.820931ms to wait for apiserver process to appear ...
	I1101 10:26:04.728828  373038 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:26:04.728845  373038 api_server.go:253] Checking apiserver healthz at https://192.168.39.18:8443/healthz ...
	I1101 10:26:04.737816  373038 api_server.go:279] https://192.168.39.18:8443/healthz returned 200:
	ok
	I1101 10:26:04.738689  373038 api_server.go:141] control plane version: v1.32.0
	I1101 10:26:04.738718  373038 api_server.go:131] duration metric: took 9.878449ms to wait for apiserver health ...
	I1101 10:26:04.738727  373038 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:26:04.744733  373038 system_pods.go:59] 7 kube-system pods found
	I1101 10:26:04.744773  373038 system_pods.go:61] "coredns-668d6bf9bc-bl94g" [2b5d8404-3426-4ae8-b0b1-c3ddcf975621] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:26:04.744780  373038 system_pods.go:61] "etcd-test-preload-753124" [b0f88a3b-72e9-4bbd-aa1c-efd2e851c5fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:26:04.744788  373038 system_pods.go:61] "kube-apiserver-test-preload-753124" [fdd6a774-db6a-4222-8e1e-ac213cf03d6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:26:04.744793  373038 system_pods.go:61] "kube-controller-manager-test-preload-753124" [945b23f5-d3ac-4a5e-b241-4600561960cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:26:04.744797  373038 system_pods.go:61] "kube-proxy-vdxtw" [4f78c451-f205-4bb7-96cf-d5872ba7cacc] Running
	I1101 10:26:04.744805  373038 system_pods.go:61] "kube-scheduler-test-preload-753124" [043ae668-4983-4b73-bc81-0941c4ccf7e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:26:04.744809  373038 system_pods.go:61] "storage-provisioner" [e8c07910-5a5a-4e07-877f-ff6d558c8b02] Running
	I1101 10:26:04.744816  373038 system_pods.go:74] duration metric: took 6.082897ms to wait for pod list to return data ...
	I1101 10:26:04.744824  373038 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:26:04.749728  373038 default_sa.go:45] found service account: "default"
	I1101 10:26:04.749754  373038 default_sa.go:55] duration metric: took 4.924195ms for default service account to be created ...
	I1101 10:26:04.749764  373038 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:26:04.753248  373038 system_pods.go:86] 7 kube-system pods found
	I1101 10:26:04.753288  373038 system_pods.go:89] "coredns-668d6bf9bc-bl94g" [2b5d8404-3426-4ae8-b0b1-c3ddcf975621] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:26:04.753299  373038 system_pods.go:89] "etcd-test-preload-753124" [b0f88a3b-72e9-4bbd-aa1c-efd2e851c5fa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 10:26:04.753312  373038 system_pods.go:89] "kube-apiserver-test-preload-753124" [fdd6a774-db6a-4222-8e1e-ac213cf03d6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 10:26:04.753320  373038 system_pods.go:89] "kube-controller-manager-test-preload-753124" [945b23f5-d3ac-4a5e-b241-4600561960cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 10:26:04.753327  373038 system_pods.go:89] "kube-proxy-vdxtw" [4f78c451-f205-4bb7-96cf-d5872ba7cacc] Running
	I1101 10:26:04.753335  373038 system_pods.go:89] "kube-scheduler-test-preload-753124" [043ae668-4983-4b73-bc81-0941c4ccf7e0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 10:26:04.753341  373038 system_pods.go:89] "storage-provisioner" [e8c07910-5a5a-4e07-877f-ff6d558c8b02] Running
	I1101 10:26:04.753351  373038 system_pods.go:126] duration metric: took 3.580541ms to wait for k8s-apps to be running ...
	I1101 10:26:04.753361  373038 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 10:26:04.753415  373038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:26:04.760656  373038 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:26:04.786940  373038 system_svc.go:56] duration metric: took 33.56926ms WaitForService to wait for kubelet
	I1101 10:26:04.786975  373038 kubeadm.go:587] duration metric: took 475.002057ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:26:04.787015  373038 node_conditions.go:102] verifying NodePressure condition ...
	I1101 10:26:04.792347  373038 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 10:26:04.792376  373038 node_conditions.go:123] node cpu capacity is 2
	I1101 10:26:04.792388  373038 node_conditions.go:105] duration metric: took 5.368429ms to run NodePressure ...
	I1101 10:26:04.792400  373038 start.go:242] waiting for startup goroutines ...
	I1101 10:26:04.803836  373038 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:26:05.915452  373038 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154758274s)
	I1101 10:26:05.915562  373038 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.111685884s)
	I1101 10:26:05.951995  373038 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:26:05.953310  373038 addons.go:515] duration metric: took 1.641335633s for enable addons: enabled=[storage-provisioner default-storageclass]
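
Both addon manifests above are applied concurrently with the version-pinned kubectl inside the VM and the in-VM kubeconfig, which is why the two apply calls start ~40ms apart and both finish at 10:26:05.915. A hedged Go sketch of that pattern (paths taken from the log, error handling simplified) could be:

package main

import (
	"log"
	"os"
	"os/exec"
	"sync"
)

func main() {
	// Version-pinned kubectl and in-VM kubeconfig, as used in the log above.
	kubectl := "/var/lib/minikube/binaries/v1.32.0/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	}
	var wg sync.WaitGroup
	for _, m := range manifests {
		wg.Add(1)
		go func(manifest string) {
			defer wg.Done()
			cmd := exec.Command(kubectl, "apply", "-f", manifest)
			cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Printf("apply %s failed: %v\n%s", manifest, err, out)
				return
			}
			log.Printf("applied %s", manifest)
		}(m)
	}
	wg.Wait()
}
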
	I1101 10:26:05.953358  373038 start.go:247] waiting for cluster config update ...
	I1101 10:26:05.953399  373038 start.go:256] writing updated cluster config ...
	I1101 10:26:05.953667  373038 ssh_runner.go:195] Run: rm -f paused
	I1101 10:26:05.962846  373038 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:26:05.963461  373038 kapi.go:59] client config for test-preload-753124: &rest.Config{Host:"https://192.168.39.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/client.crt", KeyFile:"/home/jenkins/minikube-integration/21832-344560/.minikube/profiles/test-preload-753124/client.key", CAFile:"/home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 10:26:05.970278  373038 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-bl94g" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:26:06.976001  373038 pod_ready.go:94] pod "coredns-668d6bf9bc-bl94g" is "Ready"
	I1101 10:26:06.976031  373038 pod_ready.go:86] duration metric: took 1.005726593s for pod "coredns-668d6bf9bc-bl94g" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:26:06.978803  373038 pod_ready.go:83] waiting for pod "etcd-test-preload-753124" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:26:08.985280  373038 pod_ready.go:104] pod "etcd-test-preload-753124" is not "Ready", error: <nil>
	W1101 10:26:10.987227  373038 pod_ready.go:104] pod "etcd-test-preload-753124" is not "Ready", error: <nil>
	I1101 10:26:11.986974  373038 pod_ready.go:94] pod "etcd-test-preload-753124" is "Ready"
	I1101 10:26:11.987007  373038 pod_ready.go:86] duration metric: took 5.008181924s for pod "etcd-test-preload-753124" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:26:11.991917  373038 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-753124" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:26:11.997560  373038 pod_ready.go:94] pod "kube-apiserver-test-preload-753124" is "Ready"
	I1101 10:26:11.997588  373038 pod_ready.go:86] duration metric: took 5.623816ms for pod "kube-apiserver-test-preload-753124" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:26:12.001019  373038 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-753124" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 10:26:14.008378  373038 pod_ready.go:104] pod "kube-controller-manager-test-preload-753124" is not "Ready", error: <nil>
	W1101 10:26:16.008447  373038 pod_ready.go:104] pod "kube-controller-manager-test-preload-753124" is not "Ready", error: <nil>
	I1101 10:26:16.508001  373038 pod_ready.go:94] pod "kube-controller-manager-test-preload-753124" is "Ready"
	I1101 10:26:16.508033  373038 pod_ready.go:86] duration metric: took 4.506983278s for pod "kube-controller-manager-test-preload-753124" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:26:16.511106  373038 pod_ready.go:83] waiting for pod "kube-proxy-vdxtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:26:16.516319  373038 pod_ready.go:94] pod "kube-proxy-vdxtw" is "Ready"
	I1101 10:26:16.516346  373038 pod_ready.go:86] duration metric: took 5.20023ms for pod "kube-proxy-vdxtw" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:26:16.519336  373038 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-753124" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:26:16.524534  373038 pod_ready.go:94] pod "kube-scheduler-test-preload-753124" is "Ready"
	I1101 10:26:16.524561  373038 pod_ready.go:86] duration metric: took 5.198771ms for pod "kube-scheduler-test-preload-753124" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:26:16.524573  373038 pod_ready.go:40] duration metric: took 10.561681506s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
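
The final readiness pass above waits on one pod per control-plane label (k8s-app=kube-dns, component=etcd, component=kube-apiserver, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler). The same check can be reproduced from the host with client-go; the sketch below assumes the kubeconfig path written earlier in the log and simply prints each pod's Ready condition rather than polling.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// kubeconfig path from the log above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21832-344560/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("%-60s ready=%v (selector %s)\n", p.Name, ready, sel)
		}
	}
}
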
	I1101 10:26:16.572264  373038 start.go:628] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1101 10:26:16.573747  373038 out.go:203] 
	W1101 10:26:16.575056  373038 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1101 10:26:16.576264  373038 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1101 10:26:16.577336  373038 out.go:179] * Done! kubectl is now configured to use "test-preload-753124" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.476929079Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=afb8d9ca-bf17-4a8e-9e6b-9d75366cb757 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.478913138Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06e0d022-dd4f-4c75-8c12-f9003d7d756f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.480017388Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=9d070ff1-432d-4855-b9d3-66c6a8bcfa4f name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.480209640Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4a20385ab158108811ba34d2d9857a169f09ad5423b195a08a627707f48a0262,Metadata:&PodSandboxMetadata{Name:coredns-668d6bf9bc-bl94g,Uid:2b5d8404-3426-4ae8-b0b1-c3ddcf975621,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761992764931592881,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-668d6bf9bc-bl94g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5d8404-3426-4ae8-b0b1-c3ddcf975621,k8s-app: kube-dns,pod-template-hash: 668d6bf9bc,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:26:03.265013258Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5cfc4ecf4098f627db144f83753fe3bdce7e890d19a226168f8b7fabb6c4ba4b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e8c07910-5a5a-4e07-877f-ff6d558c8b02,Namespace:kube-syste
m,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761992763592940929,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8c07910-5a5a-4e07-877f-ff6d558c8b02,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2025-11-01T10:26:03.265027241Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2d9d732a713d9afe56b11704a82b1809096549e41cabcadb185f9d47ce14f2ac,Metadata:&PodSandboxMetadata{Name:kube-proxy-vdxtw,Uid:4f78c451-f205-4bb7-96cf-d5872ba7cacc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761992763577036786,Labels:map[string]string{controller-revision-hash: 64b9dbc74b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vdxtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f78c451-f205-4bb7-96cf-d5872ba7cacc,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:26:03.265024883Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a872781df9b2622e60ab27f0cf9b93d8674296accc319b82bbd8a9a9c11424ce,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-753124,Uid:cafb7ab59e3528f34
9ce32ff42adfeab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761992759198831183,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb7ab59e3528f349ce32ff42adfeab,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.18:2379,kubernetes.io/config.hash: cafb7ab59e3528f349ce32ff42adfeab,kubernetes.io/config.seen: 2025-11-01T10:25:58.344258544Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:881345ca41f8f001444a65537f4c32b8ee0130b48a23554a01181b7e25165363,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-753124,Uid:07b53a7244883ec106754af4601a2c17,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761992759192669085,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-pre
load-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b53a7244883ec106754af4601a2c17,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 07b53a7244883ec106754af4601a2c17,kubernetes.io/config.seen: 2025-11-01T10:25:58.266169583Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bab7ec1fb6eb9120c8d1703c3a6d6d1e5584131352af9e5e99f1a5426123ec9d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-753124,Uid:24d5d2db3ef9a05c53557e954b2a6eee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761992759169015125,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d5d2db3ef9a05c53557e954b2a6eee,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.18:8443,kubernetes.io/config.hash: 24d5d2db3ef9a05c5
3557e954b2a6eee,kubernetes.io/config.seen: 2025-11-01T10:25:58.266172684Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cb6586289f32c811d6e46e6e83867decd9bb0b21ae3c1fecec949c242bee6469,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-753124,Uid:0281adea9f6581a64c5ec76d30b1eb9f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1761992759168129570,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0281adea9f6581a64c5ec76d30b1eb9f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0281adea9f6581a64c5ec76d30b1eb9f,kubernetes.io/config.seen: 2025-11-01T10:25:58.266174185Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9d070ff1-432d-4855-b9d3-66c6a8bcfa4f name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.480507647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992777480450149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06e0d022-dd4f-4c75-8c12-f9003d7d756f name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.481191222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c9b3797-388a-455c-8ef7-50922bde7394 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.481268063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c9b3797-388a-455c-8ef7-50922bde7394 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.481431069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e23072530371ced9081c49f485d95ac21f1fbbfa4d0128bfae860b64652c59c,PodSandboxId:4a20385ab158108811ba34d2d9857a169f09ad5423b195a08a627707f48a0262,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761992765338259318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bl94g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5d8404-3426-4ae8-b0b1-c3ddcf975621,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3dfb2eac66c5619339d82f5454a6853effa3602dacc2708afef431084559354,PodSandboxId:2d9d732a713d9afe56b11704a82b1809096549e41cabcadb185f9d47ce14f2ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761992763782970646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdxtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f78c451-f205-4bb7-96cf-d5872ba7cacc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c136622e1adc1d39a62eaed17144b8c2bad5a0a5ea3887d881241297b3a18b76,PodSandboxId:5cfc4ecf4098f627db144f83753fe3bdce7e890d19a226168f8b7fabb6c4ba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761992763785249669,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
c07910-5a5a-4e07-877f-ff6d558c8b02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3e41d72bdc43798155f918704c30dff7c6729b915ec05a1688f4dd54af7469,PodSandboxId:a872781df9b2622e60ab27f0cf9b93d8674296accc319b82bbd8a9a9c11424ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761992759514912256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb7ab59e3528f349ce32ff42adfeab,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e321036dd1962a89fa40b256b5fcc6dfa244ba7748875c2de62405d3014887,PodSandboxId:bab7ec1fb6eb9120c8d1703c3a6d6d1e5584131352af9e5e99f1a5426123ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761992759423598215,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d5d2db3ef9a05c53557e954b2a6eee,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e828e929b14fa589b04b22c7382053a0dc422c24954582b1b792db6ead7f46c8,PodSandboxId:881345ca41f8f001444a65537f4c32b8ee0130b48a23554a01181b7e25165363,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761992759453538020,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b53a7244883ec106754af4601a2c17,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3f0a1b16dea44ed9d3ffe65a4952f43ddce5c1cbc2c6e6b0c3f7f0ec457a556,PodSandboxId:cb6586289f32c811d6e46e6e83867decd9bb0b21ae3c1fecec949c242bee6469,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761992759397546070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0281adea9f6581a64c5ec76d30b1eb9f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c9b3797-388a-455c-8ef7-50922bde7394 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.481565235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dee8a22a-a5cc-40f2-a4b2-7c695d95557d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.481678414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dee8a22a-a5cc-40f2-a4b2-7c695d95557d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.481872156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e23072530371ced9081c49f485d95ac21f1fbbfa4d0128bfae860b64652c59c,PodSandboxId:4a20385ab158108811ba34d2d9857a169f09ad5423b195a08a627707f48a0262,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761992765338259318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bl94g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5d8404-3426-4ae8-b0b1-c3ddcf975621,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3dfb2eac66c5619339d82f5454a6853effa3602dacc2708afef431084559354,PodSandboxId:2d9d732a713d9afe56b11704a82b1809096549e41cabcadb185f9d47ce14f2ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761992763782970646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdxtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f78c451-f205-4bb7-96cf-d5872ba7cacc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c136622e1adc1d39a62eaed17144b8c2bad5a0a5ea3887d881241297b3a18b76,PodSandboxId:5cfc4ecf4098f627db144f83753fe3bdce7e890d19a226168f8b7fabb6c4ba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761992763785249669,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
c07910-5a5a-4e07-877f-ff6d558c8b02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3e41d72bdc43798155f918704c30dff7c6729b915ec05a1688f4dd54af7469,PodSandboxId:a872781df9b2622e60ab27f0cf9b93d8674296accc319b82bbd8a9a9c11424ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761992759514912256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb7ab59e3528f349ce32ff42adfeab,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e321036dd1962a89fa40b256b5fcc6dfa244ba7748875c2de62405d3014887,PodSandboxId:bab7ec1fb6eb9120c8d1703c3a6d6d1e5584131352af9e5e99f1a5426123ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761992759423598215,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d5d2db3ef9a05c53557e954b2a6eee,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e828e929b14fa589b04b22c7382053a0dc422c24954582b1b792db6ead7f46c8,PodSandboxId:881345ca41f8f001444a65537f4c32b8ee0130b48a23554a01181b7e25165363,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761992759453538020,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b53a7244883ec106754af4601a2c17,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3f0a1b16dea44ed9d3ffe65a4952f43ddce5c1cbc2c6e6b0c3f7f0ec457a556,PodSandboxId:cb6586289f32c811d6e46e6e83867decd9bb0b21ae3c1fecec949c242bee6469,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761992759397546070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0281adea9f6581a64c5ec76d30b1eb9f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dee8a22a-a5cc-40f2-a4b2-7c695d95557d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.526546865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=616ce965-c65b-4803-a6d5-9d4b5d58a667 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.526644942Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=616ce965-c65b-4803-a6d5-9d4b5d58a667 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.527798079Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f21f200-5874-4c12-b52a-d77046780043 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.528228586Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992777528203299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f21f200-5874-4c12-b52a-d77046780043 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.528853844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ef4c051-7d2b-446c-862a-2f9ea47cedba name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.528926514Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ef4c051-7d2b-446c-862a-2f9ea47cedba name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.529078267Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e23072530371ced9081c49f485d95ac21f1fbbfa4d0128bfae860b64652c59c,PodSandboxId:4a20385ab158108811ba34d2d9857a169f09ad5423b195a08a627707f48a0262,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761992765338259318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bl94g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5d8404-3426-4ae8-b0b1-c3ddcf975621,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3dfb2eac66c5619339d82f5454a6853effa3602dacc2708afef431084559354,PodSandboxId:2d9d732a713d9afe56b11704a82b1809096549e41cabcadb185f9d47ce14f2ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761992763782970646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdxtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f78c451-f205-4bb7-96cf-d5872ba7cacc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c136622e1adc1d39a62eaed17144b8c2bad5a0a5ea3887d881241297b3a18b76,PodSandboxId:5cfc4ecf4098f627db144f83753fe3bdce7e890d19a226168f8b7fabb6c4ba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761992763785249669,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
c07910-5a5a-4e07-877f-ff6d558c8b02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3e41d72bdc43798155f918704c30dff7c6729b915ec05a1688f4dd54af7469,PodSandboxId:a872781df9b2622e60ab27f0cf9b93d8674296accc319b82bbd8a9a9c11424ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761992759514912256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb7ab59e3528f349ce32ff42adfeab,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e321036dd1962a89fa40b256b5fcc6dfa244ba7748875c2de62405d3014887,PodSandboxId:bab7ec1fb6eb9120c8d1703c3a6d6d1e5584131352af9e5e99f1a5426123ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761992759423598215,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d5d2db3ef9a05c53557e954b2a6eee,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e828e929b14fa589b04b22c7382053a0dc422c24954582b1b792db6ead7f46c8,PodSandboxId:881345ca41f8f001444a65537f4c32b8ee0130b48a23554a01181b7e25165363,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761992759453538020,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b53a7244883ec106754af4601a2c17,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3f0a1b16dea44ed9d3ffe65a4952f43ddce5c1cbc2c6e6b0c3f7f0ec457a556,PodSandboxId:cb6586289f32c811d6e46e6e83867decd9bb0b21ae3c1fecec949c242bee6469,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761992759397546070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0281adea9f6581a64c5ec76d30b1eb9f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ef4c051-7d2b-446c-862a-2f9ea47cedba name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.566272606Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2f158b5-f9c7-42bb-a58b-5d18ae22761f name=/runtime.v1.RuntimeService/Version
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.566354757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2f158b5-f9c7-42bb-a58b-5d18ae22761f name=/runtime.v1.RuntimeService/Version
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.567886578Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1727dfb-bfeb-4afe-a97c-ae7fad6c8c03 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.568609110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992777568583526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1727dfb-bfeb-4afe-a97c-ae7fad6c8c03 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.569130177Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=658c3a93-e9fc-4120-9f3c-19eda0b366ce name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.569183137Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=658c3a93-e9fc-4120-9f3c-19eda0b366ce name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:26:17 test-preload-753124 crio[839]: time="2025-11-01 10:26:17.569325648Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1e23072530371ced9081c49f485d95ac21f1fbbfa4d0128bfae860b64652c59c,PodSandboxId:4a20385ab158108811ba34d2d9857a169f09ad5423b195a08a627707f48a0262,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761992765338259318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-bl94g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2b5d8404-3426-4ae8-b0b1-c3ddcf975621,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3dfb2eac66c5619339d82f5454a6853effa3602dacc2708afef431084559354,PodSandboxId:2d9d732a713d9afe56b11704a82b1809096549e41cabcadb185f9d47ce14f2ac,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761992763782970646,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdxtw,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 4f78c451-f205-4bb7-96cf-d5872ba7cacc,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c136622e1adc1d39a62eaed17144b8c2bad5a0a5ea3887d881241297b3a18b76,PodSandboxId:5cfc4ecf4098f627db144f83753fe3bdce7e890d19a226168f8b7fabb6c4ba4b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761992763785249669,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
c07910-5a5a-4e07-877f-ff6d558c8b02,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3e41d72bdc43798155f918704c30dff7c6729b915ec05a1688f4dd54af7469,PodSandboxId:a872781df9b2622e60ab27f0cf9b93d8674296accc319b82bbd8a9a9c11424ce,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761992759514912256,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cafb7ab59e3528f349ce32ff42adfeab,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02e321036dd1962a89fa40b256b5fcc6dfa244ba7748875c2de62405d3014887,PodSandboxId:bab7ec1fb6eb9120c8d1703c3a6d6d1e5584131352af9e5e99f1a5426123ec9d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761992759423598215,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24d5d2db3ef9a05c53557e954b2a6eee,},Annotations:map
[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e828e929b14fa589b04b22c7382053a0dc422c24954582b1b792db6ead7f46c8,PodSandboxId:881345ca41f8f001444a65537f4c32b8ee0130b48a23554a01181b7e25165363,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761992759453538020,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07b53a7244883ec106754af4601a2c17,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3f0a1b16dea44ed9d3ffe65a4952f43ddce5c1cbc2c6e6b0c3f7f0ec457a556,PodSandboxId:cb6586289f32c811d6e46e6e83867decd9bb0b21ae3c1fecec949c242bee6469,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761992759397546070,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-753124,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0281adea9f6581a64c5ec76d30b1eb9f,},Annotation
s:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=658c3a93-e9fc-4120-9f3c-19eda0b366ce name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1e23072530371       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   12 seconds ago      Running             coredns                   1                   4a20385ab1581       coredns-668d6bf9bc-bl94g
	c136622e1adc1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   5cfc4ecf4098f       storage-provisioner
	e3dfb2eac66c5       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   13 seconds ago      Running             kube-proxy                1                   2d9d732a713d9       kube-proxy-vdxtw
	2f3e41d72bdc4       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   18 seconds ago      Running             etcd                      1                   a872781df9b26       etcd-test-preload-753124
	e828e929b14fa       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   18 seconds ago      Running             kube-scheduler            1                   881345ca41f8f       kube-scheduler-test-preload-753124
	02e321036dd19       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   18 seconds ago      Running             kube-apiserver            1                   bab7ec1fb6eb9       kube-apiserver-test-preload-753124
	a3f0a1b16dea4       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   18 seconds ago      Running             kube-controller-manager   1                   cb6586289f32c       kube-controller-manager-test-preload-753124
	
	
	==> coredns [1e23072530371ced9081c49f485d95ac21f1fbbfa4d0128bfae860b64652c59c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:56119 - 56798 "HINFO IN 4520406892200910418.6024829667986149224. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.05987975s
	
	
	==> describe nodes <==
	Name:               test-preload-753124
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-753124
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=test-preload-753124
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_25_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:25:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-753124
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:26:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:26:04 +0000   Sat, 01 Nov 2025 10:25:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:26:04 +0000   Sat, 01 Nov 2025 10:25:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:26:04 +0000   Sat, 01 Nov 2025 10:25:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:26:04 +0000   Sat, 01 Nov 2025 10:26:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.18
	  Hostname:    test-preload-753124
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 6cd17a3156b14f0c9e4002494182f555
	  System UUID:                6cd17a31-56b1-4f0c-9e40-02494182f555
	  Boot ID:                    25ca0054-ff08-4822-abab-b5fe54481b33
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-bl94g                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     68s
	  kube-system                 etcd-test-preload-753124                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         75s
	  kube-system                 kube-apiserver-test-preload-753124             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-test-preload-753124    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-vdxtw                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-test-preload-753124             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 67s                kube-proxy       
	  Normal   Starting                 13s                kube-proxy       
	  Normal   NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node test-preload-753124 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node test-preload-753124 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node test-preload-753124 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     73s                kubelet          Node test-preload-753124 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  73s                kubelet          Node test-preload-753124 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s                kubelet          Node test-preload-753124 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 73s                kubelet          Starting kubelet.
	  Normal   NodeReady                72s                kubelet          Node test-preload-753124 status is now: NodeReady
	  Normal   RegisteredNode           69s                node-controller  Node test-preload-753124 event: Registered Node test-preload-753124 in Controller
	  Normal   Starting                 19s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node test-preload-753124 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node test-preload-753124 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node test-preload-753124 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 15s                kubelet          Node test-preload-753124 has been rebooted, boot id: 25ca0054-ff08-4822-abab-b5fe54481b33
	  Normal   RegisteredNode           12s                node-controller  Node test-preload-753124 event: Registered Node test-preload-753124 in Controller
	
	
	==> dmesg <==
	[Nov 1 10:25] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000051] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001771] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.889371] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.084945] kauditd_printk_skb: 4 callbacks suppressed
	[  +0.100618] kauditd_printk_skb: 102 callbacks suppressed
	[Nov 1 10:26] kauditd_printk_skb: 177 callbacks suppressed
	[ +10.895542] kauditd_printk_skb: 203 callbacks suppressed
	
	
	==> etcd [2f3e41d72bdc43798155f918704c30dff7c6729b915ec05a1688f4dd54af7469] <==
	{"level":"info","ts":"2025-11-01T10:25:59.969155Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-01T10:25:59.972420Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:25:59.978989Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:25:59.979011Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-01T10:25:59.993994Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T10:25:59.994310Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"d6d01a71dfc61a14","initial-advertise-peer-urls":["https://192.168.39.18:2380"],"listen-peer-urls":["https://192.168.39.18:2380"],"advertise-client-urls":["https://192.168.39.18:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.18:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T10:25:59.994367Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T10:25:59.994484Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.18:2380"}
	{"level":"info","ts":"2025-11-01T10:25:59.994514Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.18:2380"}
	{"level":"info","ts":"2025-11-01T10:26:01.116385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T10:26:01.116444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T10:26:01.116479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 received MsgPreVoteResp from d6d01a71dfc61a14 at term 2"}
	{"level":"info","ts":"2025-11-01T10:26:01.116492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T10:26:01.116508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 received MsgVoteResp from d6d01a71dfc61a14 at term 3"}
	{"level":"info","ts":"2025-11-01T10:26:01.116517Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d6d01a71dfc61a14 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T10:26:01.116523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d6d01a71dfc61a14 elected leader d6d01a71dfc61a14 at term 3"}
	{"level":"info","ts":"2025-11-01T10:26:01.118165Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"d6d01a71dfc61a14","local-member-attributes":"{Name:test-preload-753124 ClientURLs:[https://192.168.39.18:2379]}","request-path":"/0/members/d6d01a71dfc61a14/attributes","cluster-id":"3959cc3c468ccbd1","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T10:26:01.118296Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:26:01.118451Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T10:26:01.119376Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-01T10:26:01.119546Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T10:26:01.119578Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T10:26:01.120012Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-01T10:26:01.120615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T10:26:01.120783Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.18:2379"}
	
	
	==> kernel <==
	 10:26:17 up 0 min,  0 users,  load average: 0.50, 0.15, 0.05
	Linux test-preload-753124 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [02e321036dd1962a89fa40b256b5fcc6dfa244ba7748875c2de62405d3014887] <==
	I1101 10:26:02.368339       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:26:02.382452       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1101 10:26:02.385668       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:26:02.386037       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:26:02.386127       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:26:02.413065       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1101 10:26:02.413173       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1101 10:26:02.416529       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:26:02.416562       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:26:02.416569       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:26:02.416575       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:26:02.420594       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:26:02.433207       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1101 10:26:02.433253       1 policy_source.go:240] refreshing policies
	I1101 10:26:02.437650       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E1101 10:26:02.464132       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 10:26:03.263734       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 10:26:03.342619       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1101 10:26:04.136093       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1101 10:26:04.191642       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1101 10:26:04.238648       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:26:04.252546       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:26:05.888190       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1101 10:26:05.934234       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:26:05.982063       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [a3f0a1b16dea44ed9d3ffe65a4952f43ddce5c1cbc2c6e6b0c3f7f0ec457a556] <==
	I1101 10:26:05.521526       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1101 10:26:05.522165       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="test-preload-753124"
	I1101 10:26:05.522206       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:26:05.523772       1 shared_informer.go:320] Caches are synced for PVC protection
	I1101 10:26:05.525977       1 shared_informer.go:320] Caches are synced for persistent volume
	I1101 10:26:05.530330       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1101 10:26:05.533036       1 shared_informer.go:320] Caches are synced for daemon sets
	I1101 10:26:05.538070       1 shared_informer.go:320] Caches are synced for stateful set
	I1101 10:26:05.541149       1 shared_informer.go:320] Caches are synced for cronjob
	I1101 10:26:05.541230       1 shared_informer.go:320] Caches are synced for namespace
	I1101 10:26:05.550950       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1101 10:26:05.551747       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1101 10:26:05.552302       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1101 10:26:05.569639       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1101 10:26:05.569934       1 shared_informer.go:320] Caches are synced for endpoint
	I1101 10:26:05.578002       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1101 10:26:05.620899       1 shared_informer.go:320] Caches are synced for garbage collector
	I1101 10:26:05.620943       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:26:05.620951       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:26:05.621108       1 shared_informer.go:320] Caches are synced for garbage collector
	I1101 10:26:05.915798       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="385.331111ms"
	I1101 10:26:05.918018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="36.394µs"
	I1101 10:26:06.458165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="129.489µs"
	I1101 10:26:06.496524       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.328908ms"
	I1101 10:26:06.497996       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="191.522µs"
	
	
	==> kube-proxy [e3dfb2eac66c5619339d82f5454a6853effa3602dacc2708afef431084559354] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1101 10:26:04.083867       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1101 10:26:04.104808       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.18"]
	E1101 10:26:04.105089       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:26:04.171898       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1101 10:26:04.172018       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:26:04.172090       1 server_linux.go:170] "Using iptables Proxier"
	I1101 10:26:04.178975       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:26:04.181087       1 server.go:497] "Version info" version="v1.32.0"
	I1101 10:26:04.181322       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:26:04.187585       1 config.go:199] "Starting service config controller"
	I1101 10:26:04.187610       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1101 10:26:04.187646       1 config.go:105] "Starting endpoint slice config controller"
	I1101 10:26:04.187651       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1101 10:26:04.191550       1 config.go:329] "Starting node config controller"
	I1101 10:26:04.191663       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1101 10:26:04.287779       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1101 10:26:04.287799       1 shared_informer.go:320] Caches are synced for service config
	I1101 10:26:04.295388       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e828e929b14fa589b04b22c7382053a0dc422c24954582b1b792db6ead7f46c8] <==
	I1101 10:26:00.785936       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:26:02.303185       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:26:02.303268       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:26:02.303290       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:26:02.303311       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:26:02.427615       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1101 10:26:02.428486       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:26:02.435490       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:26:02.435942       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1101 10:26:02.435962       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:26:02.437862       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 10:26:02.545927       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: I1101 10:26:02.496559    1166 setters.go:602] "Node became not ready" node="test-preload-753124" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T10:26:02Z","lastTransitionTime":"2025-11-01T10:26:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"}
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: E1101 10:26:02.514499    1166 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"test-preload-753124\" not found"
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: E1101 10:26:02.615238    1166 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"test-preload-753124\" not found"
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: E1101 10:26:02.716217    1166 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"test-preload-753124\" not found"
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: I1101 10:26:02.780796    1166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-753124"
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: E1101 10:26:02.797461    1166 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-753124\" already exists" pod="kube-system/kube-scheduler-test-preload-753124"
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: I1101 10:26:02.797487    1166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-753124"
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: E1101 10:26:02.807099    1166 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-753124\" already exists" pod="kube-system/kube-apiserver-test-preload-753124"
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: I1101 10:26:02.807134    1166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-753124"
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: E1101 10:26:02.818647    1166 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-753124\" already exists" pod="kube-system/kube-controller-manager-test-preload-753124"
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: I1101 10:26:02.818671    1166 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-753124"
	Nov 01 10:26:02 test-preload-753124 kubelet[1166]: E1101 10:26:02.828997    1166 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-753124\" already exists" pod="kube-system/etcd-test-preload-753124"
	Nov 01 10:26:03 test-preload-753124 kubelet[1166]: I1101 10:26:03.259364    1166 apiserver.go:52] "Watching apiserver"
	Nov 01 10:26:03 test-preload-753124 kubelet[1166]: E1101 10:26:03.267911    1166 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-bl94g" podUID="2b5d8404-3426-4ae8-b0b1-c3ddcf975621"
	Nov 01 10:26:03 test-preload-753124 kubelet[1166]: I1101 10:26:03.280075    1166 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 01 10:26:03 test-preload-753124 kubelet[1166]: I1101 10:26:03.334313    1166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e8c07910-5a5a-4e07-877f-ff6d558c8b02-tmp\") pod \"storage-provisioner\" (UID: \"e8c07910-5a5a-4e07-877f-ff6d558c8b02\") " pod="kube-system/storage-provisioner"
	Nov 01 10:26:03 test-preload-753124 kubelet[1166]: I1101 10:26:03.334368    1166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f78c451-f205-4bb7-96cf-d5872ba7cacc-xtables-lock\") pod \"kube-proxy-vdxtw\" (UID: \"4f78c451-f205-4bb7-96cf-d5872ba7cacc\") " pod="kube-system/kube-proxy-vdxtw"
	Nov 01 10:26:03 test-preload-753124 kubelet[1166]: I1101 10:26:03.334387    1166 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f78c451-f205-4bb7-96cf-d5872ba7cacc-lib-modules\") pod \"kube-proxy-vdxtw\" (UID: \"4f78c451-f205-4bb7-96cf-d5872ba7cacc\") " pod="kube-system/kube-proxy-vdxtw"
	Nov 01 10:26:03 test-preload-753124 kubelet[1166]: E1101 10:26:03.334998    1166 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 10:26:03 test-preload-753124 kubelet[1166]: E1101 10:26:03.335375    1166 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b5d8404-3426-4ae8-b0b1-c3ddcf975621-config-volume podName:2b5d8404-3426-4ae8-b0b1-c3ddcf975621 nodeName:}" failed. No retries permitted until 2025-11-01 10:26:03.835346306 +0000 UTC m=+5.686133904 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2b5d8404-3426-4ae8-b0b1-c3ddcf975621-config-volume") pod "coredns-668d6bf9bc-bl94g" (UID: "2b5d8404-3426-4ae8-b0b1-c3ddcf975621") : object "kube-system"/"coredns" not registered
	Nov 01 10:26:03 test-preload-753124 kubelet[1166]: E1101 10:26:03.841035    1166 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 10:26:03 test-preload-753124 kubelet[1166]: E1101 10:26:03.841100    1166 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2b5d8404-3426-4ae8-b0b1-c3ddcf975621-config-volume podName:2b5d8404-3426-4ae8-b0b1-c3ddcf975621 nodeName:}" failed. No retries permitted until 2025-11-01 10:26:04.841085048 +0000 UTC m=+6.691872658 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2b5d8404-3426-4ae8-b0b1-c3ddcf975621-config-volume") pod "coredns-668d6bf9bc-bl94g" (UID: "2b5d8404-3426-4ae8-b0b1-c3ddcf975621") : object "kube-system"/"coredns" not registered
	Nov 01 10:26:04 test-preload-753124 kubelet[1166]: I1101 10:26:04.071047    1166 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 01 10:26:08 test-preload-753124 kubelet[1166]: E1101 10:26:08.355627    1166 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992768355208500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 01 10:26:08 test-preload-753124 kubelet[1166]: E1101 10:26:08.356057    1166 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761992768355208500,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c136622e1adc1d39a62eaed17144b8c2bad5a0a5ea3887d881241297b3a18b76] <==
	I1101 10:26:03.908972       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-753124 -n test-preload-753124
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-753124 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-753124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-753124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-753124: (1.018410183s)
--- FAIL: TestPreload (128.58s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (83.55s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-876158 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-876158 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m16.87740811s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-876158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-876158" primary control-plane node in "pause-876158" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-876158" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:32:43.028640  379731 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:32:43.028820  379731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:32:43.028835  379731 out.go:374] Setting ErrFile to fd 2...
	I1101 10:32:43.028842  379731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:32:43.029181  379731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 10:32:43.029849  379731 out.go:368] Setting JSON to false
	I1101 10:32:43.031357  379731 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8111,"bootTime":1761985052,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:32:43.031514  379731 start.go:143] virtualization: kvm guest
	I1101 10:32:43.033458  379731 out.go:179] * [pause-876158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:32:43.035361  379731 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:32:43.035371  379731 notify.go:221] Checking for updates...
	I1101 10:32:43.037968  379731 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:32:43.039717  379731 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 10:32:43.041075  379731 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 10:32:43.042740  379731 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:32:43.044149  379731 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:32:43.046293  379731 config.go:182] Loaded profile config "pause-876158": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:32:43.047160  379731 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:32:43.100587  379731 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 10:32:43.101900  379731 start.go:309] selected driver: kvm2
	I1101 10:32:43.101923  379731 start.go:930] validating driver "kvm2" against &{Name:pause-876158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-876158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.174 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:32:43.102150  379731 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:32:43.103307  379731 cni.go:84] Creating CNI manager for ""
	I1101 10:32:43.103383  379731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:32:43.103474  379731 start.go:353] cluster config:
	{Name:pause-876158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-876158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.174 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:32:43.103677  379731 iso.go:125] acquiring lock: {Name:mkc74493fbbc2007c645c4ed6349cf76e7fb2185 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:32:43.106315  379731 out.go:179] * Starting "pause-876158" primary control-plane node in "pause-876158" cluster
	I1101 10:32:43.107835  379731 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:32:43.107929  379731 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:32:43.107946  379731 cache.go:59] Caching tarball of preloaded images
	I1101 10:32:43.108112  379731 preload.go:233] Found /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:32:43.108129  379731 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:32:43.108286  379731 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/pause-876158/config.json ...
	I1101 10:32:43.108536  379731 start.go:360] acquireMachinesLock for pause-876158: {Name:mkd221a68334bc82c567a9a06c8563af1e1c38bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 10:33:03.306249  379731 start.go:364] duration metric: took 20.197665391s to acquireMachinesLock for "pause-876158"
	I1101 10:33:03.306324  379731 start.go:96] Skipping create...Using existing machine configuration
	I1101 10:33:03.306332  379731 fix.go:54] fixHost starting: 
	I1101 10:33:03.308739  379731 fix.go:112] recreateIfNeeded on pause-876158: state=Running err=<nil>
	W1101 10:33:03.308775  379731 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 10:33:03.310593  379731 out.go:252] * Updating the running kvm2 "pause-876158" VM ...
	I1101 10:33:03.310628  379731 machine.go:94] provisionDockerMachine start ...
	I1101 10:33:03.314594  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:03.315198  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:03.315237  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:03.315638  379731 main.go:143] libmachine: Using SSH client type: native
	I1101 10:33:03.315954  379731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I1101 10:33:03.315968  379731 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:33:03.437254  379731 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-876158
	
	I1101 10:33:03.437291  379731 buildroot.go:166] provisioning hostname "pause-876158"
	I1101 10:33:03.440430  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:03.441001  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:03.441033  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:03.441323  379731 main.go:143] libmachine: Using SSH client type: native
	I1101 10:33:03.441554  379731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I1101 10:33:03.441575  379731 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-876158 && echo "pause-876158" | sudo tee /etc/hostname
	I1101 10:33:03.585946  379731 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-876158
	
	I1101 10:33:03.589057  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:03.589497  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:03.589540  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:03.589771  379731 main.go:143] libmachine: Using SSH client type: native
	I1101 10:33:03.589995  379731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I1101 10:33:03.590011  379731 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-876158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-876158/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-876158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:33:03.710062  379731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:33:03.710094  379731 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21832-344560/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-344560/.minikube}
	I1101 10:33:03.710129  379731 buildroot.go:174] setting up certificates
	I1101 10:33:03.710141  379731 provision.go:84] configureAuth start
	I1101 10:33:03.713413  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:03.713976  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:03.714013  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:03.717216  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:03.717666  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:03.717696  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:03.717834  379731 provision.go:143] copyHostCerts
	I1101 10:33:03.717904  379731 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-344560/.minikube/ca.pem, removing ...
	I1101 10:33:03.717922  379731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-344560/.minikube/ca.pem
	I1101 10:33:03.717983  379731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/ca.pem (1082 bytes)
	I1101 10:33:03.718079  379731 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-344560/.minikube/cert.pem, removing ...
	I1101 10:33:03.718089  379731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-344560/.minikube/cert.pem
	I1101 10:33:03.718117  379731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/cert.pem (1123 bytes)
	I1101 10:33:03.718169  379731 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-344560/.minikube/key.pem, removing ...
	I1101 10:33:03.718176  379731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-344560/.minikube/key.pem
	I1101 10:33:03.718195  379731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/key.pem (1679 bytes)
	I1101 10:33:03.718240  379731 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem org=jenkins.pause-876158 san=[127.0.0.1 192.168.72.174 localhost minikube pause-876158]
	I1101 10:33:04.122308  379731 provision.go:177] copyRemoteCerts
	I1101 10:33:04.122367  379731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:33:04.125483  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:04.126074  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:04.126117  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:04.126346  379731 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/pause-876158/id_rsa Username:docker}
	I1101 10:33:04.218552  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 10:33:04.261206  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:33:04.307499  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:33:04.358681  379731 provision.go:87] duration metric: took 648.522952ms to configureAuth
	I1101 10:33:04.358734  379731 buildroot.go:189] setting minikube options for container-runtime
	I1101 10:33:04.358999  379731 config.go:182] Loaded profile config "pause-876158": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:33:04.362164  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:04.362801  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:04.362832  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:04.363071  379731 main.go:143] libmachine: Using SSH client type: native
	I1101 10:33:04.363344  379731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I1101 10:33:04.363360  379731 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:33:09.966714  379731 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:33:09.966765  379731 machine.go:97] duration metric: took 6.656112517s to provisionDockerMachine
	I1101 10:33:09.966784  379731 start.go:293] postStartSetup for "pause-876158" (driver="kvm2")
	I1101 10:33:09.966799  379731 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:33:09.966920  379731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:33:09.970257  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:09.970858  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:09.970917  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:09.971146  379731 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/pause-876158/id_rsa Username:docker}
	I1101 10:33:10.066102  379731 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:33:10.073428  379731 info.go:137] Remote host: Buildroot 2025.02
	I1101 10:33:10.073459  379731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-344560/.minikube/addons for local assets ...
	I1101 10:33:10.073534  379731 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-344560/.minikube/files for local assets ...
	I1101 10:33:10.073637  379731 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem -> 3485182.pem in /etc/ssl/certs
	I1101 10:33:10.073784  379731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:33:10.090690  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem --> /etc/ssl/certs/3485182.pem (1708 bytes)
	I1101 10:33:10.127781  379731 start.go:296] duration metric: took 160.975812ms for postStartSetup
	I1101 10:33:10.127836  379731 fix.go:56] duration metric: took 6.821503842s for fixHost
	I1101 10:33:10.130577  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:10.131095  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:10.131123  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:10.131361  379731 main.go:143] libmachine: Using SSH client type: native
	I1101 10:33:10.131608  379731 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.174 22 <nil> <nil>}
	I1101 10:33:10.131622  379731 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 10:33:10.245380  379731 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761993190.236530743
	
	I1101 10:33:10.245425  379731 fix.go:216] guest clock: 1761993190.236530743
	I1101 10:33:10.245451  379731 fix.go:229] Guest: 2025-11-01 10:33:10.236530743 +0000 UTC Remote: 2025-11-01 10:33:10.127841736 +0000 UTC m=+27.170765411 (delta=108.689007ms)
	I1101 10:33:10.245494  379731 fix.go:200] guest clock delta is within tolerance: 108.689007ms
	I1101 10:33:10.245503  379731 start.go:83] releasing machines lock for "pause-876158", held for 6.939220376s
	I1101 10:33:10.248890  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:10.249444  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:10.249482  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:10.250165  379731 ssh_runner.go:195] Run: cat /version.json
	I1101 10:33:10.250214  379731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:33:10.253588  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:10.253638  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:10.254114  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:10.254187  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:10.254227  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:10.254260  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:10.254430  379731 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/pause-876158/id_rsa Username:docker}
	I1101 10:33:10.254656  379731 sshutil.go:53] new ssh client: &{IP:192.168.72.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/pause-876158/id_rsa Username:docker}
	I1101 10:33:10.367756  379731 ssh_runner.go:195] Run: systemctl --version
	I1101 10:33:10.375292  379731 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:33:10.543449  379731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:33:10.554200  379731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:33:10.554285  379731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:33:10.569403  379731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 10:33:10.569438  379731 start.go:496] detecting cgroup driver to use...
	I1101 10:33:10.569511  379731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:33:10.599075  379731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:33:10.622025  379731 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:33:10.622100  379731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:33:10.649748  379731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:33:10.669706  379731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:33:10.922615  379731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:33:11.124086  379731 docker.go:234] disabling docker service ...
	I1101 10:33:11.124158  379731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:33:11.154684  379731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:33:11.173886  379731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:33:11.396008  379731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:33:11.582523  379731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:33:11.602859  379731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:33:11.630498  379731 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:33:11.630574  379731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:33:11.644692  379731 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:33:11.644773  379731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:33:11.659626  379731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:33:11.674811  379731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:33:11.689517  379731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:33:11.705419  379731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:33:11.721266  379731 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:33:11.737313  379731 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:33:11.754488  379731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:33:11.766793  379731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:33:11.779660  379731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:33:11.961674  379731 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:33:12.735198  379731 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:33:12.735302  379731 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:33:12.747804  379731 start.go:564] Will wait 60s for crictl version
	I1101 10:33:12.747920  379731 ssh_runner.go:195] Run: which crictl
	I1101 10:33:12.754364  379731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:33:12.803258  379731 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 10:33:12.803374  379731 ssh_runner.go:195] Run: crio --version
	I1101 10:33:12.846839  379731 ssh_runner.go:195] Run: crio --version
	I1101 10:33:12.887181  379731 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 10:33:12.892017  379731 main.go:143] libmachine: domain pause-876158 has defined MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:12.892532  379731 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:82:de:0e", ip: ""} in network mk-pause-876158: {Iface:virbr4 ExpiryTime:2025-11-01 11:31:35 +0000 UTC Type:0 Mac:52:54:00:82:de:0e Iaid: IPaddr:192.168.72.174 Prefix:24 Hostname:pause-876158 Clientid:01:52:54:00:82:de:0e}
	I1101 10:33:12.892559  379731 main.go:143] libmachine: domain pause-876158 has defined IP address 192.168.72.174 and MAC address 52:54:00:82:de:0e in network mk-pause-876158
	I1101 10:33:12.892772  379731 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1101 10:33:12.901445  379731 kubeadm.go:884] updating cluster {Name:pause-876158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-876158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.174 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 10:33:12.901676  379731 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:33:12.901768  379731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:33:12.960631  379731 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:33:12.960656  379731 crio.go:433] Images already preloaded, skipping extraction
	I1101 10:33:12.960715  379731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:33:13.004816  379731 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:33:13.004849  379731 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:33:13.004861  379731 kubeadm.go:935] updating node { 192.168.72.174 8443 v1.34.1 crio true true} ...
	I1101 10:33:13.005026  379731 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-876158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-876158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:33:13.005130  379731 ssh_runner.go:195] Run: crio config
	I1101 10:33:13.173565  379731 cni.go:84] Creating CNI manager for ""
	I1101 10:33:13.173600  379731 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:33:13.173630  379731 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:33:13.173666  379731 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.174 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-876158 NodeName:pause-876158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:33:13.173935  379731 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-876158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.174"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.174"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:33:13.174036  379731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:33:13.200347  379731 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:33:13.200454  379731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:33:13.230740  379731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1101 10:33:13.294610  379731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:33:13.362511  379731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1101 10:33:13.455280  379731 ssh_runner.go:195] Run: grep 192.168.72.174	control-plane.minikube.internal$ /etc/hosts
	I1101 10:33:13.479330  379731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:33:13.834424  379731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:33:13.883394  379731 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/pause-876158 for IP: 192.168.72.174
	I1101 10:33:13.883422  379731 certs.go:195] generating shared ca certs ...
	I1101 10:33:13.883447  379731 certs.go:227] acquiring lock for ca certs: {Name:mkba0fe79f6b0ed99353299aaf34c6fbc547c6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:33:13.883684  379731 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key
	I1101 10:33:13.883790  379731 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key
	I1101 10:33:13.883819  379731 certs.go:257] generating profile certs ...
	I1101 10:33:13.883976  379731 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/pause-876158/client.key
	I1101 10:33:13.884406  379731 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/pause-876158/apiserver.key.3d5a6616
	I1101 10:33:13.884623  379731 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/pause-876158/proxy-client.key
	I1101 10:33:13.884841  379731 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/348518.pem (1338 bytes)
	W1101 10:33:13.884901  379731 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-344560/.minikube/certs/348518_empty.pem, impossibly tiny 0 bytes
	I1101 10:33:13.884929  379731 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:33:13.884969  379731 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:33:13.885007  379731 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:33:13.885044  379731 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem (1679 bytes)
	I1101 10:33:13.885111  379731 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem (1708 bytes)
	I1101 10:33:13.886628  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:33:13.956952  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:33:14.029923  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:33:14.141374  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:33:14.221487  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/pause-876158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:33:14.314983  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/pause-876158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:33:14.417430  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/pause-876158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:33:14.523435  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/pause-876158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 10:33:14.651656  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:33:14.740130  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/certs/348518.pem --> /usr/share/ca-certificates/348518.pem (1338 bytes)
	I1101 10:33:14.818112  379731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem --> /usr/share/ca-certificates/3485182.pem (1708 bytes)
	I1101 10:33:14.886843  379731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:33:14.928748  379731 ssh_runner.go:195] Run: openssl version
	I1101 10:33:14.951939  379731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/348518.pem && ln -fs /usr/share/ca-certificates/348518.pem /etc/ssl/certs/348518.pem"
	I1101 10:33:14.992672  379731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/348518.pem
	I1101 10:33:15.017450  379731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:34 /usr/share/ca-certificates/348518.pem
	I1101 10:33:15.017537  379731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/348518.pem
	I1101 10:33:15.048377  379731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/348518.pem /etc/ssl/certs/51391683.0"
	I1101 10:33:15.082653  379731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3485182.pem && ln -fs /usr/share/ca-certificates/3485182.pem /etc/ssl/certs/3485182.pem"
	I1101 10:33:15.123841  379731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3485182.pem
	I1101 10:33:15.149254  379731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:34 /usr/share/ca-certificates/3485182.pem
	I1101 10:33:15.149331  379731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3485182.pem
	I1101 10:33:15.172061  379731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3485182.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:33:15.208554  379731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:33:15.251244  379731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:33:15.263850  379731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:33:15.263962  379731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:33:15.289353  379731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:33:15.312476  379731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:33:15.323594  379731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 10:33:15.334591  379731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 10:33:15.345626  379731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 10:33:15.355512  379731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 10:33:15.366748  379731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 10:33:15.380622  379731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 10:33:15.390242  379731 kubeadm.go:401] StartCluster: {Name:pause-876158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-876158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.174 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:33:15.390395  379731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:33:15.390483  379731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:33:15.516794  379731 cri.go:89] found id: "2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584"
	I1101 10:33:15.516825  379731 cri.go:89] found id: "5664f4cdb8b0cb0cd1151361649408884d21ad30ec13a20d477e8e43b30f8b6c"
	I1101 10:33:15.516832  379731 cri.go:89] found id: "e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5"
	I1101 10:33:15.516837  379731 cri.go:89] found id: "2fec3be4d90e193581410f134f96ec6279009038eee2bdbe11637a437d72eaf0"
	I1101 10:33:15.516842  379731 cri.go:89] found id: "78b1529a9d2203abffb52d97d6edf5d03eb7c066c60de924bf3e83b04e830284"
	I1101 10:33:15.516886  379731 cri.go:89] found id: "79aebb8f8da407f0f84efde4b63cf2e47e9eb50d7ced3ee123d1953b0dbd8510"
	I1101 10:33:15.516987  379731 cri.go:89] found id: "7df95d2fcb27782a86a71ed9cb0e688d258a57b0d238639891ef55a70c4cf9cb"
	I1101 10:33:15.516996  379731 cri.go:89] found id: "010a54bd51a11997ecbbd817b64e787c33ea10f72fb379e43e0006cface0fd09"
	I1101 10:33:15.517001  379731 cri.go:89] found id: "e81e38812703405b5661ab27705daff7268970c025c17942995a287176ff9476"
	I1101 10:33:15.517016  379731 cri.go:89] found id: "cd60e130ac6034481a0f78602e9ea0488e951ff2153c2437b41ca80ea6c95bff"
	I1101 10:33:15.517024  379731 cri.go:89] found id: "d61798f5bf56edc07229011405048be7ac88442bea432ad2c2c1d78a9d1d15fa"
	I1101 10:33:15.517028  379731 cri.go:89] found id: "11c5520f42ba62f0d2633efc085a58abb0f154305d7f3986d7d5609640447cff"
	I1101 10:33:15.517032  379731 cri.go:89] found id: ""
	I1101 10:33:15.517097  379731 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
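The stderr log above shows minikube verifying, for each control-plane certificate, that it will not expire within the next 24 hours via `openssl x509 -noout -in <cert> -checkend 86400`. For reference, a minimal sketch of the same check done in Go with the standard library instead of shelling out to openssl; the certificate path below is a placeholder taken from the log, not an assertion about minikube's internals:

// Hypothetical sketch: reimplements "openssl x509 -checkend 86400" in Go.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at certPath expires
// within the given window (what -checkend tests for).
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Equivalent to -checkend: does NotAfter fall inside now+window?
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24h")
}

Like openssl's -checkend, a non-zero result here only means the certificate is close to expiry, not that it is already invalid.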
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-876158 -n pause-876158
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-876158 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-876158 logs -n 25: (3.700819103s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────────────
──┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────────────
──┤
	│ ssh     │ -p cilium-543676 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                   │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                              │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                        │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo containerd config dump                                                                                                                                                                                                │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo crio config                                                                                                                                                                                                           │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ delete  │ -p cilium-543676                                                                                                                                                                                                                            │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p guest-651909 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-651909              │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:32 UTC │
	│ ssh     │ -p NoKubernetes-146388 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                     │ NoKubernetes-146388       │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │                     │
	│ delete  │ -p NoKubernetes-146388                                                                                                                                                                                                                      │ NoKubernetes-146388       │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │ 01 Nov 25 10:32 UTC │
	│ start   │ -p cert-expiration-383589 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                                        │ cert-expiration-383589    │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p pause-876158 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-876158              │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p force-systemd-flag-706270 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-706270 │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │ 01 Nov 25 10:33 UTC │
	│ delete  │ -p force-systemd-env-112765                                                                                                                                                                                                                 │ force-systemd-env-112765  │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │ 01 Nov 25 10:32 UTC │
	│ start   │ -p cert-options-842807 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-842807       │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │                     │
	│ ssh     │ force-systemd-flag-706270 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-706270 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ delete  │ -p force-systemd-flag-706270                                                                                                                                                                                                                │ force-systemd-flag-706270 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p old-k8s-version-152855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152855    │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────────────
──┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:33:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:33:59.566185  380597 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:33:59.566484  380597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:33:59.566495  380597 out.go:374] Setting ErrFile to fd 2...
	I1101 10:33:59.566499  380597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:33:59.566713  380597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 10:33:59.567394  380597 out.go:368] Setting JSON to false
	I1101 10:33:59.568719  380597 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8188,"bootTime":1761985052,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:33:59.568791  380597 start.go:143] virtualization: kvm guest
	I1101 10:33:59.571101  380597 out.go:179] * [old-k8s-version-152855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:33:59.574133  380597 notify.go:221] Checking for updates...
	I1101 10:33:59.574149  380597 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:33:59.575954  380597 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:33:59.577415  380597 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 10:33:59.578673  380597 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 10:33:59.580122  380597 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:33:59.581241  380597 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:33:59.583161  380597 config.go:182] Loaded profile config "cert-expiration-383589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:33:59.583339  380597 config.go:182] Loaded profile config "cert-options-842807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:33:59.583477  380597 config.go:182] Loaded profile config "guest-651909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 10:33:59.583669  380597 config.go:182] Loaded profile config "pause-876158": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:33:59.583803  380597 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:33:59.632578  380597 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 10:33:59.633800  380597 start.go:309] selected driver: kvm2
	I1101 10:33:59.633821  380597 start.go:930] validating driver "kvm2" against <nil>
	I1101 10:33:59.633848  380597 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:33:59.634629  380597 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:33:59.634956  380597 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:33:59.635004  380597 cni.go:84] Creating CNI manager for ""
	I1101 10:33:59.635077  380597 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:33:59.635089  380597 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 10:33:59.635156  380597 start.go:353] cluster config:
	{Name:old-k8s-version-152855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-152855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHA
gentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:33:59.635294  380597 iso.go:125] acquiring lock: {Name:mkc74493fbbc2007c645c4ed6349cf76e7fb2185 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:33:59.636786  380597 out.go:179] * Starting "old-k8s-version-152855" primary control-plane node in "old-k8s-version-152855" cluster
	I1101 10:33:59.637746  380597 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:33:59.637785  380597 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:33:59.637795  380597 cache.go:59] Caching tarball of preloaded images
	I1101 10:33:59.637893  380597 preload.go:233] Found /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:33:59.637905  380597 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 10:33:59.637995  380597 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/config.json ...
	I1101 10:33:59.638015  380597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/config.json: {Name:mk878f27e4eb1aac282516f51b4962ddf5db22b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:33:59.638165  380597 start.go:360] acquireMachinesLock for old-k8s-version-152855: {Name:mkd221a68334bc82c567a9a06c8563af1e1c38bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 10:33:59.638205  380597 start.go:364] duration metric: took 23.628µs to acquireMachinesLock for "old-k8s-version-152855"
	I1101 10:33:59.638226  380597 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-152855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-152855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:33:59.638279  380597 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 10:33:58.934335  379731 pod_ready.go:94] pod "etcd-pause-876158" is "Ready"
	I1101 10:33:58.934368  379731 pod_ready.go:86] duration metric: took 5.009339561s for pod "etcd-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:58.938279  379731 pod_ready.go:83] waiting for pod "kube-apiserver-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:58.945645  379731 pod_ready.go:94] pod "kube-apiserver-pause-876158" is "Ready"
	I1101 10:33:58.945680  379731 pod_ready.go:86] duration metric: took 7.374715ms for pod "kube-apiserver-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:58.949241  379731 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:58.956851  379731 pod_ready.go:94] pod "kube-controller-manager-pause-876158" is "Ready"
	I1101 10:33:58.956918  379731 pod_ready.go:86] duration metric: took 7.621948ms for pod "kube-controller-manager-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:58.961442  379731 pod_ready.go:83] waiting for pod "kube-proxy-4fktf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:59.131769  379731 pod_ready.go:94] pod "kube-proxy-4fktf" is "Ready"
	I1101 10:33:59.131799  379731 pod_ready.go:86] duration metric: took 170.327767ms for pod "kube-proxy-4fktf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:59.331320  379731 pod_ready.go:83] waiting for pod "kube-scheduler-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:59.731352  379731 pod_ready.go:94] pod "kube-scheduler-pause-876158" is "Ready"
	I1101 10:33:59.731384  379731 pod_ready.go:86] duration metric: took 400.029736ms for pod "kube-scheduler-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:59.731401  379731 pod_ready.go:40] duration metric: took 12.827018716s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:33:59.801721  379731 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:33:59.804205  379731 out.go:179] * Done! kubectl is now configured to use "pause-876158" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.570186590Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5a633a16fe226079ae3,Metadata:&PodSandboxMetadata{Name:coredns-66bc5c9577-jt729,Uid:92e8f148-6a9d-427d-a523-68a579131ec6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761993193482427964,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,k8s-app: kube-dns,pod-template-hash: 66bc5c9577,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:32:05.764822318Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&PodSandboxMetadata{Name:etcd-pause-876158,Uid:803348f0ad4648836b4395fbe9f96117,Namespace:kube-system,Attempt:1,
},State:SANDBOX_READY,CreatedAt:1761993193257943482,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.174:2379,kubernetes.io/config.hash: 803348f0ad4648836b4395fbe9f96117,kubernetes.io/config.seen: 2025-11-01T10:32:00.508565258Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&PodSandboxMetadata{Name:kube-proxy-4fktf,Uid:a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761993193245966325,Labels:map[string]string{controller-revision-hash: 66486579fc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2025-11-01T10:32:05.186038782Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-876158,Uid:949ecae0312f4c3b69405f37e19e08f8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761993193153020775,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.174:8443,kubernetes.io/config.hash: 949ecae0312f4c3b69405f37e19e08f8,kubernetes.io/config.seen: 2025-11-01T10:32:00.508569257Z,kubernetes.io/config.source: file,},RuntimeH
andler:,},&PodSandbox{Id:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-876158,Uid:f25a81ea579abde52be137a9a148994e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761993193114556942,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f25a81ea579abde52be137a9a148994e,kubernetes.io/config.seen: 2025-11-01T10:32:00.508570358Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-876158,Uid:973c63e287059513809e8ca2a9137cd0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1761993193110538066,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 973c63e287059513809e8ca2a9137cd0,kubernetes.io/config.seen: 2025-11-01T10:32:00.508572281Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=e6e32e45-d240-4636-b786-5b009edd8843 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.571315884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2fef1afc-968e-49f9-9bcb-51c99fa44776 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.571402117Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2fef1afc-968e-49f9-9bcb-51c99fa44776 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.571702948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:265c57d85cf516e7f15b9f4718451a80368f2803140064598a5700c7c6183070,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761993224833858308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48282534e030364f2f8c5f1cded5e59cd875050e613ba6d0bf233bb1692286c,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761993224825434388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f171ed0fbd1f5bf2ae64c6e5b4951bdf2e68b0fc7c1606c22c3c9e7533f915b0,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761993219073231006,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688c471e89968dc1bdbbf0813fa854bc949909909d1c810fd980416e633315dc,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761993219035168734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c71a895a06e893a8df5e00f8b827024417d6f59c1ce8d00280754ceff495603,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761993219029027280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624bd4fc21c8a04dd1c550689123fdbe90eedd800087e615514693c98342dfc3,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761993218993382217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2fef1afc-968e-49f9-9bcb-51c99fa44776 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.603236172Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=07754412-1be3-4c92-9d5c-f60b1958ded3 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.603856982Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07754412-1be3-4c92-9d5c-f60b1958ded3 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.606406416Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37cc1960-924e-4322-a773-16a6d936b771 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.607387011Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761993240607352306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37cc1960-924e-4322-a773-16a6d936b771 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.608350345Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c71f7121-0ec9-4c0d-8a3f-a74afc6400d0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.608403806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c71f7121-0ec9-4c0d-8a3f-a74afc6400d0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.608802530Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:265c57d85cf516e7f15b9f4718451a80368f2803140064598a5700c7c6183070,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761993224833858308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48282534e030364f2f8c5f1cded5e59cd875050e613ba6d0bf233bb1692286c,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761993224825434388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f171ed0fbd1f5bf2ae64c6e5b4951bdf2e68b0fc7c1606c22c3c9e7533f915b0,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761993219073231006,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688c471e89968dc1bdbbf0813fa854bc949909909d1c810fd980416e633315dc,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761993219035168734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c71a895a06e893a8df5e00f8b827024417d6f59c1ce8d00280754ceff495603,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761993219029027280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624bd4fc21c8a04dd1c550689123fdbe90eedd800087e615514693c98342dfc3,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761993218993382217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5
a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761993195029120676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761993193920460830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5664f4cdb8b0cb0cd1151361649408884d21ad30ec13a20d477e8e43b30f8b6c,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761993193998332010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fec3be4d90e193581410f134f96ec6279009038eee2bdbe11637a437d72eaf0,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761993193861011137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},
Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b1529a9d2203abffb52d97d6edf5d03eb7c066c60de924bf3e83b04e830284,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761993193835506367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aebb8f8da407f0f84efde4b63cf2e47e9eb50d7ced3ee123d1953b0dbd8510,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761993193781180020,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c71f7121-0ec9-4c0d-8a3f-a74afc6400d0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.668700845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=545d99bc-7828-40a2-8b14-799c712a20d4 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.668938010Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=545d99bc-7828-40a2-8b14-799c712a20d4 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.670839302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a5f347d-eba9-4b99-88ba-794284f922d7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.671499287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761993240671460598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a5f347d-eba9-4b99-88ba-794284f922d7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.672184260Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4401f8b9-f08d-44f5-b349-5f1ef1c02b4c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.672419758Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4401f8b9-f08d-44f5-b349-5f1ef1c02b4c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.673089277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:265c57d85cf516e7f15b9f4718451a80368f2803140064598a5700c7c6183070,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761993224833858308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48282534e030364f2f8c5f1cded5e59cd875050e613ba6d0bf233bb1692286c,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761993224825434388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f171ed0fbd1f5bf2ae64c6e5b4951bdf2e68b0fc7c1606c22c3c9e7533f915b0,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761993219073231006,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688c471e89968dc1bdbbf0813fa854bc949909909d1c810fd980416e633315dc,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761993219035168734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c71a895a06e893a8df5e00f8b827024417d6f59c1ce8d00280754ceff495603,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761993219029027280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624bd4fc21c8a04dd1c550689123fdbe90eedd800087e615514693c98342dfc3,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761993218993382217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5
a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761993195029120676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761993193920460830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5664f4cdb8b0cb0cd1151361649408884d21ad30ec13a20d477e8e43b30f8b6c,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761993193998332010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fec3be4d90e193581410f134f96ec6279009038eee2bdbe11637a437d72eaf0,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761993193861011137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},
Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b1529a9d2203abffb52d97d6edf5d03eb7c066c60de924bf3e83b04e830284,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761993193835506367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aebb8f8da407f0f84efde4b63cf2e47e9eb50d7ced3ee123d1953b0dbd8510,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761993193781180020,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4401f8b9-f08d-44f5-b349-5f1ef1c02b4c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.730175724Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=358381bb-1aaf-4a85-ad44-ea0735e556ae name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.730280696Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=358381bb-1aaf-4a85-ad44-ea0735e556ae name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.732307981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a26a2c28-cf93-47cb-b8ab-d234dd495b6a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.734189209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761993240734135842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a26a2c28-cf93-47cb-b8ab-d234dd495b6a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.737735780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22f485cc-43ad-4efd-95f3-0e4a2f4f9187 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.737828824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22f485cc-43ad-4efd-95f3-0e4a2f4f9187 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:00 pause-876158 crio[2790]: time="2025-11-01 10:34:00.738129649Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:265c57d85cf516e7f15b9f4718451a80368f2803140064598a5700c7c6183070,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761993224833858308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48282534e030364f2f8c5f1cded5e59cd875050e613ba6d0bf233bb1692286c,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761993224825434388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f171ed0fbd1f5bf2ae64c6e5b4951bdf2e68b0fc7c1606c22c3c9e7533f915b0,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761993219073231006,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688c471e89968dc1bdbbf0813fa854bc949909909d1c810fd980416e633315dc,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761993219035168734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c71a895a06e893a8df5e00f8b827024417d6f59c1ce8d00280754ceff495603,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761993219029027280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624bd4fc21c8a04dd1c550689123fdbe90eedd800087e615514693c98342dfc3,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761993218993382217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5
a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761993195029120676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761993193920460830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5664f4cdb8b0cb0cd1151361649408884d21ad30ec13a20d477e8e43b30f8b6c,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761993193998332010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fec3be4d90e193581410f134f96ec6279009038eee2bdbe11637a437d72eaf0,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761993193861011137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},
Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b1529a9d2203abffb52d97d6edf5d03eb7c066c60de924bf3e83b04e830284,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761993193835506367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aebb8f8da407f0f84efde4b63cf2e47e9eb50d7ced3ee123d1953b0dbd8510,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761993193781180020,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22f485cc-43ad-4efd-95f3-0e4a2f4f9187 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	265c57d85cf51       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   15 seconds ago      Running             kube-proxy                2                   3daf63d94c95a       kube-proxy-4fktf
	c48282534e030       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   2                   15c35f8a35a17       coredns-66bc5c9577-jt729
	f171ed0fbd1f5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   21 seconds ago      Running             kube-controller-manager   2                   cfe6db88f50c4       kube-controller-manager-pause-876158
	688c471e89968       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   21 seconds ago      Running             kube-apiserver            2                   4674f95832033       kube-apiserver-pause-876158
	5c71a895a06e8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   21 seconds ago      Running             kube-scheduler            2                   c17725687394d       kube-scheduler-pause-876158
	624bd4fc21c8a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   21 seconds ago      Running             etcd                      2                   630273ed62424       etcd-pause-876158
	2baabf8753192       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   45 seconds ago      Exited              coredns                   1                   15c35f8a35a17       coredns-66bc5c9577-jt729
	5664f4cdb8b0c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   46 seconds ago      Exited              kube-controller-manager   1                   cfe6db88f50c4       kube-controller-manager-pause-876158
	e318598b1cdc5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   46 seconds ago      Exited              kube-proxy                1                   3daf63d94c95a       kube-proxy-4fktf
	2fec3be4d90e1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   46 seconds ago      Exited              kube-apiserver            1                   4674f95832033       kube-apiserver-pause-876158
	78b1529a9d220       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   46 seconds ago      Exited              etcd                      1                   630273ed62424       etcd-pause-876158
	79aebb8f8da40       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   47 seconds ago      Exited              kube-scheduler            1                   c17725687394d       kube-scheduler-pause-876158
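	The table above lists every CRI-O container on the node: each control-plane component is Running at attempt 2, with its attempt-1 predecessor shown beside it as Exited. A minimal way to reproduce this view is sketched below; it assumes the pause-876158 profile is still up and uses the same out/minikube-linux-amd64 binary the test drives.
	
	out/minikube-linux-amd64 -p pause-876158 ssh "sudo crictl ps -a"   # -a includes Exited containers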
	
	
	==> coredns [2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:53276 - 56786 "HINFO IN 7355434863311998644.4232500489853051955. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.081693452s
	
	
	==> coredns [c48282534e030364f2f8c5f1cded5e59cd875050e613ba6d0bf233bb1692286c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52701 - 33642 "HINFO IN 1384503627503618802.3154399698577371114. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072873502s
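	Both coredns blocks report the same configuration SHA512, so the Corefile served before and after the restart is unchanged; the attempt-1 instance additionally logs its SIGTERM shutdown and the 5s lameduck period. To see the Corefile behind that hash, one could dump the ConfigMap, a sketch assuming the kubeconfig context carries the profile name (minikube's default behaviour):
	
	kubectl --context pause-876158 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'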
	
	
	==> describe nodes <==
	Name:               pause-876158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-876158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=pause-876158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_32_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:31:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-876158
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:33:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:33:43 +0000   Sat, 01 Nov 2025 10:31:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:33:43 +0000   Sat, 01 Nov 2025 10:31:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:33:43 +0000   Sat, 01 Nov 2025 10:31:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:33:43 +0000   Sat, 01 Nov 2025 10:32:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.174
	  Hostname:    pause-876158
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 654173f0fde3400092ffb627889717a5
	  System UUID:                654173f0-fde3-4000-92ff-b627889717a5
	  Boot ID:                    e460a208-5d5f-4be6-a9cf-9a4098dfa869
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jt729                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     116s
	  kube-system                 etcd-pause-876158                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m3s
	  kube-system                 kube-apiserver-pause-876158             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-pause-876158    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-4fktf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-pause-876158             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 114s               kube-proxy       
	  Normal  Starting                 15s                kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m1s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node pause-876158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node pause-876158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node pause-876158 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m                 kubelet          Node pause-876158 status is now: NodeReady
	  Normal  RegisteredNode           117s               node-controller  Node pause-876158 event: Registered Node pause-876158 in Controller
	  Normal  RegisteredNode           40s                node-controller  Node pause-876158 event: Registered Node pause-876158 in Controller
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node pause-876158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node pause-876158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node pause-876158 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-876158 event: Registered Node pause-876158 in Controller
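	The node description is regular kubectl describe node output: pause-876158 is Ready and untainted, the kubelet reports two Starting events (2m1s and 23s ago), and three RegisteredNode events from the node-controller track the successive controller-manager instances. To regenerate it against this cluster (assuming, as above, a context named after the profile):
	
	kubectl --context pause-876158 describe node pause-876158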
	
	
	==> dmesg <==
	[Nov 1 10:31] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001387] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005919] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.505794] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090191] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.137646] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.100240] kauditd_printk_skb: 18 callbacks suppressed
	[Nov 1 10:32] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.005376] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.779521] kauditd_printk_skb: 219 callbacks suppressed
	[ +23.896833] kauditd_printk_skb: 38 callbacks suppressed
	[Nov 1 10:33] kauditd_printk_skb: 297 callbacks suppressed
	[  +4.026798] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.138217] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.579245] kauditd_printk_skb: 81 callbacks suppressed
	[  +6.928923] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [624bd4fc21c8a04dd1c550689123fdbe90eedd800087e615514693c98342dfc3] <==
	{"level":"warn","ts":"2025-11-01T10:33:42.682343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.710187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.720237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.746278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.753197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.770973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.805388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.868912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:45.716108Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.919017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:368"}
	{"level":"info","ts":"2025-11-01T10:33:45.716200Z","caller":"traceutil/trace.go:172","msg":"trace[576577424] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:548; }","duration":"172.076412ms","start":"2025-11-01T10:33:45.544106Z","end":"2025-11-01T10:33:45.716183Z","steps":["trace[576577424] 'agreement among raft nodes before linearized reading'  (duration: 61.592411ms)","trace[576577424] 'range keys from in-memory index tree'  (duration: 110.252488ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:45.718411Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.742691ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6141109162183821431 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-876158.1873db80c7f4709d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-876158.1873db80c7f4709d\" value_size:462 lease:6141109162183821428 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:33:45.718734Z","caller":"traceutil/trace.go:172","msg":"trace[1439771260] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"181.372818ms","start":"2025-11-01T10:33:45.537344Z","end":"2025-11-01T10:33:45.718717Z","steps":["trace[1439771260] 'process raft request'  (duration: 68.393524ms)","trace[1439771260] 'compare'  (duration: 110.000166ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:33:45.921523Z","caller":"traceutil/trace.go:172","msg":"trace[312665755] linearizableReadLoop","detail":"{readStateIndex:591; appliedIndex:591; }","duration":"190.025505ms","start":"2025-11-01T10:33:45.731482Z","end":"2025-11-01T10:33:45.921507Z","steps":["trace[312665755] 'read index received'  (duration: 190.020736ms)","trace[312665755] 'applied index is now lower than readState.Index'  (duration: 4.119µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:45.973324Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"241.772675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-11-01T10:33:45.973418Z","caller":"traceutil/trace.go:172","msg":"trace[640122231] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:549; }","duration":"241.922675ms","start":"2025-11-01T10:33:45.731478Z","end":"2025-11-01T10:33:45.973401Z","steps":["trace[640122231] 'agreement among raft nodes before linearized reading'  (duration: 190.163526ms)","trace[640122231] 'range keys from in-memory index tree'  (duration: 51.521809ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:45.973683Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.635669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:33:45.973769Z","caller":"traceutil/trace.go:172","msg":"trace[1396806250] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:550; }","duration":"194.720967ms","start":"2025-11-01T10:33:45.779027Z","end":"2025-11-01T10:33:45.973748Z","steps":["trace[1396806250] 'agreement among raft nodes before linearized reading'  (duration: 194.523999ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:33:45.973883Z","caller":"traceutil/trace.go:172","msg":"trace[1512195498] transaction","detail":"{read_only:false; response_revision:550; number_of_response:1; }","duration":"244.472618ms","start":"2025-11-01T10:33:45.729382Z","end":"2025-11-01T10:33:45.973854Z","steps":["trace[1512195498] 'process raft request'  (duration: 192.320924ms)","trace[1512195498] 'compare'  (duration: 51.34993ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:45.974062Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.861616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" limit:1 ","response":"range_response_count:1 size:5731"}
	{"level":"info","ts":"2025-11-01T10:33:45.974092Z","caller":"traceutil/trace.go:172","msg":"trace[1271903266] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-jt729; range_end:; response_count:1; response_revision:550; }","duration":"177.899961ms","start":"2025-11-01T10:33:45.796184Z","end":"2025-11-01T10:33:45.974084Z","steps":["trace[1271903266] 'agreement among raft nodes before linearized reading'  (duration: 177.765699ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:33:46.127007Z","caller":"traceutil/trace.go:172","msg":"trace[1955359810] linearizableReadLoop","detail":"{readStateIndex:592; appliedIndex:592; }","duration":"126.987794ms","start":"2025-11-01T10:33:45.999972Z","end":"2025-11-01T10:33:46.126960Z","steps":["trace[1955359810] 'read index received'  (duration: 126.984094ms)","trace[1955359810] 'applied index is now lower than readState.Index'  (duration: 3.283µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:46.127977Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.98992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:33:46.128015Z","caller":"traceutil/trace.go:172","msg":"trace[1801600657] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:550; }","duration":"128.041499ms","start":"2025-11-01T10:33:45.999965Z","end":"2025-11-01T10:33:46.128007Z","steps":["trace[1801600657] 'agreement among raft nodes before linearized reading'  (duration: 127.141275ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:33:46.128286Z","caller":"traceutil/trace.go:172","msg":"trace[890136585] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"139.456187ms","start":"2025-11-01T10:33:45.988821Z","end":"2025-11-01T10:33:46.128277Z","steps":["trace[890136585] 'process raft request'  (duration: 139.265035ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:33:46.128411Z","caller":"traceutil/trace.go:172","msg":"trace[434859032] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"140.932575ms","start":"2025-11-01T10:33:45.987469Z","end":"2025-11-01T10:33:46.128401Z","steps":["trace[434859032] 'process raft request'  (duration: 139.479553ms)"],"step_count":1}
	
	
	==> etcd [78b1529a9d2203abffb52d97d6edf5d03eb7c066c60de924bf3e83b04e830284] <==
	{"level":"warn","ts":"2025-11-01T10:33:18.907799Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.634115ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6141109162177553293 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" mod_revision:402 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" value_size:5658 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:33:18.907824Z","caller":"traceutil/trace.go:172","msg":"trace[664005657] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:418; }","duration":"184.906584ms","start":"2025-11-01T10:33:18.722883Z","end":"2025-11-01T10:33:18.907790Z","steps":["trace[664005657] 'agreement among raft nodes before linearized reading'  (duration: 59.192665ms)","trace[664005657] 'range keys from in-memory index tree'  (duration: 125.597707ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:33:18.907851Z","caller":"traceutil/trace.go:172","msg":"trace[1906371261] linearizableReadLoop","detail":"{readStateIndex:446; appliedIndex:445; }","duration":"125.798541ms","start":"2025-11-01T10:33:18.782046Z","end":"2025-11-01T10:33:18.907845Z","steps":["trace[1906371261] 'read index received'  (duration: 16.776µs)","trace[1906371261] 'applied index is now lower than readState.Index'  (duration: 125.781295ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:18.907954Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.401506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2025-11-01T10:33:18.907973Z","caller":"traceutil/trace.go:172","msg":"trace[1666495431] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:419; }","duration":"184.421085ms","start":"2025-11-01T10:33:18.723545Z","end":"2025-11-01T10:33:18.907966Z","steps":["trace[1666495431] 'agreement among raft nodes before linearized reading'  (duration: 184.320979ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:33:18.908208Z","caller":"traceutil/trace.go:172","msg":"trace[561682606] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"329.28051ms","start":"2025-11-01T10:33:18.578898Z","end":"2025-11-01T10:33:18.908178Z","steps":["trace[561682606] 'process raft request'  (duration: 203.219701ms)","trace[561682606] 'compare'  (duration: 125.541092ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:18.908339Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:33:18.578879Z","time spent":"329.410527ms","remote":"127.0.0.1:35428","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5717,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" mod_revision:402 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" value_size:5658 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" > >"}
	{"level":"info","ts":"2025-11-01T10:33:36.030188Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:33:36.030273Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-876158","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.174:2380"],"advertise-client-urls":["https://192.168.72.174:2379"]}
	{"level":"error","ts":"2025-11-01T10:33:36.030364Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:33:36.032079Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:33:36.032140Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:33:36.032165Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6939dc92cf6d5539","current-leader-member-id":"6939dc92cf6d5539"}
	{"level":"info","ts":"2025-11-01T10:33:36.032249Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:33:36.032293Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:33:36.032364Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:33:36.032370Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:33:36.032366Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:33:36.032409Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.174:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:33:36.032416Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.174:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:33:36.032421Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.174:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:33:36.036395Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.72.174:2380"}
	{"level":"info","ts":"2025-11-01T10:33:36.036639Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.72.174:2380"}
	{"level":"error","ts":"2025-11-01T10:33:36.036561Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.174:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:33:36.036725Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-876158","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.174:2380"],"advertise-client-urls":["https://192.168.72.174:2379"]}
	
	
	==> kernel <==
	 10:34:02 up 2 min,  0 users,  load average: 1.09, 0.50, 0.19
	Linux pause-876158 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [2fec3be4d90e193581410f134f96ec6279009038eee2bdbe11637a437d72eaf0] <==
	I1101 10:33:25.956422       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1101 10:33:25.956428       1 controller.go:132] Ending legacy_token_tracking_controller
	I1101 10:33:25.956431       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1101 10:33:25.956441       1 customresource_discovery_controller.go:332] Shutting down DiscoveryController
	I1101 10:33:25.956986       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 10:33:25.957222       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 10:33:25.957934       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1101 10:33:25.957991       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 10:33:25.958032       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1101 10:33:25.957940       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I1101 10:33:25.959243       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:33:25.959303       1 secure_serving.go:259] Stopped listening on [::]:8443
	I1101 10:33:25.959507       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1101 10:33:25.960051       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 10:33:25.957950       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I1101 10:33:25.958009       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1101 10:33:25.956002       1 controller.go:157] Shutting down quota evaluator
	I1101 10:33:25.960342       1 controller.go:176] quota evaluator worker shutdown
	I1101 10:33:25.958134       1 cluster_authentication_trust_controller.go:482] Shutting down cluster_authentication_trust_controller controller
	I1101 10:33:25.960363       1 controller.go:176] quota evaluator worker shutdown
	I1101 10:33:25.960369       1 controller.go:176] quota evaluator worker shutdown
	I1101 10:33:25.960375       1 controller.go:176] quota evaluator worker shutdown
	I1101 10:33:25.960381       1 controller.go:176] quota evaluator worker shutdown
	I1101 10:33:25.958145       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I1101 10:33:25.959161       1 repairip.go:246] Shutting down ipallocator-repair-controller
	
	
	==> kube-apiserver [688c471e89968dc1bdbbf0813fa854bc949909909d1c810fd980416e633315dc] <==
	I1101 10:33:43.724726       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:33:43.724789       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:33:43.724808       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:33:43.724823       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:33:43.749660       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:33:43.752144       1 policy_source.go:240] refreshing policies
	I1101 10:33:43.784018       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:33:43.787865       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:33:43.789012       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:33:43.789185       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:33:43.789353       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:33:43.789410       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:33:43.795396       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:33:43.802306       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:33:43.840408       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:33:44.557039       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:33:44.590963       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1101 10:33:45.722906       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.174]
	I1101 10:33:45.726997       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:33:45.986689       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:33:46.308740       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:33:46.382390       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:33:46.446412       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:33:46.458300       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:33:53.553203       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5664f4cdb8b0cb0cd1151361649408884d21ad30ec13a20d477e8e43b30f8b6c] <==
	I1101 10:33:21.203539       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:33:21.204485       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:33:21.207739       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:33:21.219775       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:33:21.220057       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:33:21.221732       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:33:21.222122       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:33:21.222207       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:33:21.224255       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:33:21.228182       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:33:21.230576       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:33:21.231930       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:33:21.232009       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:33:21.235724       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:33:21.237960       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:33:21.239178       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:33:21.248462       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:33:21.254704       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:33:21.259039       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:33:21.268054       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:33:21.269520       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:33:21.269688       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:33:21.269704       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:33:21.269743       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:33:21.313541       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [f171ed0fbd1f5bf2ae64c6e5b4951bdf2e68b0fc7c1606c22c3c9e7533f915b0] <==
	I1101 10:33:47.441839       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:33:47.441913       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:33:47.441948       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:33:47.441952       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:33:47.441957       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:33:47.444576       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:33:47.448022       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:33:47.448088       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:33:47.448273       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:33:47.448298       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:33:47.448308       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:33:47.448315       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:33:47.450309       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:33:47.450448       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:33:47.450736       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:33:47.450840       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-876158"
	I1101 10:33:47.450886       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:33:47.451782       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:33:47.451967       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:33:47.457562       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:33:47.457659       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:33:47.458761       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:33:47.461519       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:33:47.463914       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:33:47.467247       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [265c57d85cf516e7f15b9f4718451a80368f2803140064598a5700c7c6183070] <==
	I1101 10:33:45.224393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:33:45.325788       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:33:45.325927       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.174"]
	E1101 10:33:45.326079       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:33:45.375068       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:33:45.375150       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:33:45.375183       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:33:45.387098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:33:45.387468       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:33:45.387487       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:45.393687       1 config.go:200] "Starting service config controller"
	I1101 10:33:45.393737       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:33:45.393760       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:33:45.393764       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:33:45.393774       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:33:45.393777       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:33:45.395774       1 config.go:309] "Starting node config controller"
	I1101 10:33:45.395805       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:33:45.395812       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:33:45.493878       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:33:45.493921       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:33:45.493939       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5] <==
	I1101 10:33:16.352036       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:33:17.852716       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:33:17.852827       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.174"]
	E1101 10:33:17.853144       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:33:17.897871       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:33:17.897947       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:33:17.897969       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:33:17.914216       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:33:17.915383       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:33:17.915424       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:17.929768       1 config.go:200] "Starting service config controller"
	I1101 10:33:17.929799       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:33:17.929817       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:33:17.929821       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:33:17.929868       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:33:17.929891       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:33:17.932367       1 config.go:309] "Starting node config controller"
	I1101 10:33:17.932396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:33:17.932404       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:33:18.030651       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:33:18.030669       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:33:18.030683       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5c71a895a06e893a8df5e00f8b827024417d6f59c1ce8d00280754ceff495603] <==
	I1101 10:33:39.937174       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:33:43.665909       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:33:43.666674       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:33:43.666769       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:33:43.666801       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:33:43.746264       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:33:43.746311       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:43.748936       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:43.748968       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:43.749149       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:33:43.749200       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:33:43.851096       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [79aebb8f8da407f0f84efde4b63cf2e47e9eb50d7ced3ee123d1953b0dbd8510] <==
	I1101 10:33:15.920213       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:33:17.931720       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:33:17.931758       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:18.259875       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:33:18.259923       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:33:18.259977       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:18.259984       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:18.259995       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:33:18.260002       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:33:18.261484       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:33:18.261546       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:33:18.360792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:33:18.360896       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:33:18.361000       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:36.185407       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:33:36.185509       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:33:36.185643       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:33:36.185909       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:36.186008       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1101 10:33:36.186051       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:33:36.186111       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.772521    3927 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.791678    3927 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: E1101 10:33:43.813567    3927 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-876158\" already exists" pod="kube-system/kube-apiserver-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: E1101 10:33:43.815736    3927 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-876158\" already exists" pod="kube-system/kube-controller-manager-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.815781    3927 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: E1101 10:33:43.834408    3927 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-876158\" already exists" pod="kube-system/kube-scheduler-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.834472    3927 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.853468    3927 kubelet_node_status.go:124] "Node was previously registered" node="pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.853581    3927 kubelet_node_status.go:78] "Successfully registered node" node="pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.853730    3927 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: E1101 10:33:43.858026    3927 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-876158\" already exists" pod="kube-system/etcd-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.858076    3927 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.859436    3927 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: E1101 10:33:43.882836    3927 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-876158\" already exists" pod="kube-system/kube-apiserver-pause-876158"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.472955    3927 apiserver.go:52] "Watching apiserver"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.493503    3927 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.548097    3927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3154768-c6f0-4b8f-9a95-c6f6fb16dc98-xtables-lock\") pod \"kube-proxy-4fktf\" (UID: \"a3154768-c6f0-4b8f-9a95-c6f6fb16dc98\") " pod="kube-system/kube-proxy-4fktf"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.548221    3927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3154768-c6f0-4b8f-9a95-c6f6fb16dc98-lib-modules\") pod \"kube-proxy-4fktf\" (UID: \"a3154768-c6f0-4b8f-9a95-c6f6fb16dc98\") " pod="kube-system/kube-proxy-4fktf"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.778900    3927 scope.go:117] "RemoveContainer" containerID="e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.780076    3927 scope.go:117] "RemoveContainer" containerID="2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584"
	Nov 01 10:33:48 pause-876158 kubelet[3927]: E1101 10:33:48.640312    3927 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761993228638134044  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:33:48 pause-876158 kubelet[3927]: E1101 10:33:48.640377    3927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761993228638134044  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:33:53 pause-876158 kubelet[3927]: I1101 10:33:53.494140    3927 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:33:58 pause-876158 kubelet[3927]: E1101 10:33:58.644461    3927 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761993238643334442  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:33:58 pause-876158 kubelet[3927]: E1101 10:33:58.644546    3927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761993238643334442  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
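Both kube-proxy instances in the logs above report "Kube-proxy configuration may be incomplete or incorrect" because nodePortAddresses is left unset, so NodePort connections are accepted on all local IPs. Purely as an illustration (not part of this test run), a minimal sketch of the corresponding KubeProxyConfiguration snippet might look like the following; the CIDR is a placeholder derived from the node IP 192.168.72.174 seen in the logs, and newer kube-proxy releases also accept the special value "primary", which is what the warning's suggested flag --nodeport-addresses primary selects.

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# Limit NodePort listeners to a specific node subnet instead of all local IPs.
	# Placeholder CIDR for illustration only; adjust to the cluster's node network,
	# or use the single value "primary" on releases that support it.
	nodePortAddresses:
	  - "192.168.72.0/24"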
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-876158 -n pause-876158
helpers_test.go:269: (dbg) Run:  kubectl --context pause-876158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-876158 -n pause-876158
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-876158 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-876158 logs -n 25: (1.604890432s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────────────
──┐
	│ COMMAND │                                                                                                                    ARGS                                                                                                                     │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────────────
──┤
	│ ssh     │ -p cilium-543676 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                   │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                              │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                        │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo cri-dockerd --version                                                                                                                                                                                                 │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                   │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo systemctl cat containerd --no-pager                                                                                                                                                                                   │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                            │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo cat /etc/containerd/config.toml                                                                                                                                                                                       │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo containerd config dump                                                                                                                                                                                                │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                         │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo systemctl cat crio --no-pager                                                                                                                                                                                         │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                               │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ ssh     │ -p cilium-543676 sudo crio config                                                                                                                                                                                                           │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │                     │
	│ delete  │ -p cilium-543676                                                                                                                                                                                                                            │ cilium-543676             │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:31 UTC │
	│ start   │ -p guest-651909 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                                     │ guest-651909              │ jenkins │ v1.37.0 │ 01 Nov 25 10:31 UTC │ 01 Nov 25 10:32 UTC │
	│ ssh     │ -p NoKubernetes-146388 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                     │ NoKubernetes-146388       │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │                     │
	│ delete  │ -p NoKubernetes-146388                                                                                                                                                                                                                      │ NoKubernetes-146388       │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │ 01 Nov 25 10:32 UTC │
	│ start   │ -p cert-expiration-383589 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                                        │ cert-expiration-383589    │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p pause-876158 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                                              │ pause-876158              │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p force-systemd-flag-706270 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                                                   │ force-systemd-flag-706270 │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │ 01 Nov 25 10:33 UTC │
	│ delete  │ -p force-systemd-env-112765                                                                                                                                                                                                                 │ force-systemd-env-112765  │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │ 01 Nov 25 10:32 UTC │
	│ start   │ -p cert-options-842807 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio                     │ cert-options-842807       │ jenkins │ v1.37.0 │ 01 Nov 25 10:32 UTC │                     │
	│ ssh     │ force-systemd-flag-706270 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                        │ force-systemd-flag-706270 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ delete  │ -p force-systemd-flag-706270                                                                                                                                                                                                                │ force-systemd-flag-706270 │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │ 01 Nov 25 10:33 UTC │
	│ start   │ -p old-k8s-version-152855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-152855    │ jenkins │ v1.37.0 │ 01 Nov 25 10:33 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────────────
──┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 10:33:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 10:33:59.566185  380597 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:33:59.566484  380597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:33:59.566495  380597 out.go:374] Setting ErrFile to fd 2...
	I1101 10:33:59.566499  380597 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:33:59.566713  380597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 10:33:59.567394  380597 out.go:368] Setting JSON to false
	I1101 10:33:59.568719  380597 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8188,"bootTime":1761985052,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:33:59.568791  380597 start.go:143] virtualization: kvm guest
	I1101 10:33:59.571101  380597 out.go:179] * [old-k8s-version-152855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:33:59.574133  380597 notify.go:221] Checking for updates...
	I1101 10:33:59.574149  380597 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:33:59.575954  380597 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:33:59.577415  380597 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 10:33:59.578673  380597 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 10:33:59.580122  380597 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:33:59.581241  380597 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:33:59.583161  380597 config.go:182] Loaded profile config "cert-expiration-383589": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:33:59.583339  380597 config.go:182] Loaded profile config "cert-options-842807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:33:59.583477  380597 config.go:182] Loaded profile config "guest-651909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 10:33:59.583669  380597 config.go:182] Loaded profile config "pause-876158": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:33:59.583803  380597 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:33:59.632578  380597 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 10:33:59.633800  380597 start.go:309] selected driver: kvm2
	I1101 10:33:59.633821  380597 start.go:930] validating driver "kvm2" against <nil>
	I1101 10:33:59.633848  380597 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:33:59.634629  380597 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:33:59.634956  380597 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:33:59.635004  380597 cni.go:84] Creating CNI manager for ""
	I1101 10:33:59.635077  380597 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:33:59.635089  380597 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 10:33:59.635156  380597 start.go:353] cluster config:
	{Name:old-k8s-version-152855 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-152855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:33:59.635294  380597 iso.go:125] acquiring lock: {Name:mkc74493fbbc2007c645c4ed6349cf76e7fb2185 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:33:59.636786  380597 out.go:179] * Starting "old-k8s-version-152855" primary control-plane node in "old-k8s-version-152855" cluster
	I1101 10:33:59.637746  380597 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 10:33:59.637785  380597 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 10:33:59.637795  380597 cache.go:59] Caching tarball of preloaded images
	I1101 10:33:59.637893  380597 preload.go:233] Found /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:33:59.637905  380597 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 10:33:59.637995  380597 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/config.json ...
	I1101 10:33:59.638015  380597 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/config.json: {Name:mk878f27e4eb1aac282516f51b4962ddf5db22b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:33:59.638165  380597 start.go:360] acquireMachinesLock for old-k8s-version-152855: {Name:mkd221a68334bc82c567a9a06c8563af1e1c38bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 10:33:59.638205  380597 start.go:364] duration metric: took 23.628µs to acquireMachinesLock for "old-k8s-version-152855"
	I1101 10:33:59.638226  380597 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-152855 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-152855 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:33:59.638279  380597 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 10:33:58.934335  379731 pod_ready.go:94] pod "etcd-pause-876158" is "Ready"
	I1101 10:33:58.934368  379731 pod_ready.go:86] duration metric: took 5.009339561s for pod "etcd-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:58.938279  379731 pod_ready.go:83] waiting for pod "kube-apiserver-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:58.945645  379731 pod_ready.go:94] pod "kube-apiserver-pause-876158" is "Ready"
	I1101 10:33:58.945680  379731 pod_ready.go:86] duration metric: took 7.374715ms for pod "kube-apiserver-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:58.949241  379731 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:58.956851  379731 pod_ready.go:94] pod "kube-controller-manager-pause-876158" is "Ready"
	I1101 10:33:58.956918  379731 pod_ready.go:86] duration metric: took 7.621948ms for pod "kube-controller-manager-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:58.961442  379731 pod_ready.go:83] waiting for pod "kube-proxy-4fktf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:59.131769  379731 pod_ready.go:94] pod "kube-proxy-4fktf" is "Ready"
	I1101 10:33:59.131799  379731 pod_ready.go:86] duration metric: took 170.327767ms for pod "kube-proxy-4fktf" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:59.331320  379731 pod_ready.go:83] waiting for pod "kube-scheduler-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:59.731352  379731 pod_ready.go:94] pod "kube-scheduler-pause-876158" is "Ready"
	I1101 10:33:59.731384  379731 pod_ready.go:86] duration metric: took 400.029736ms for pod "kube-scheduler-pause-876158" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 10:33:59.731401  379731 pod_ready.go:40] duration metric: took 12.827018716s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 10:33:59.801721  379731 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 10:33:59.804205  379731 out.go:179] * Done! kubectl is now configured to use "pause-876158" cluster and "default" namespace by default
	I1101 10:33:57.971699  379891 crio.go:462] duration metric: took 1.979056466s to copy over tarball
	I1101 10:33:57.971835  379891 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 10:33:59.986064  379891 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.014199506s)
	I1101 10:33:59.986083  379891 crio.go:469] duration metric: took 2.014359753s to extract the tarball
	I1101 10:33:59.986090  379891 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 10:34:00.033878  379891 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:34:00.092302  379891 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:34:00.092315  379891 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:34:00.092322  379891 kubeadm.go:935] updating node { 192.168.83.139 8555 v1.34.1 crio true true} ...
	I1101 10:34:00.092407  379891 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-options-842807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.139
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-options-842807 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 10:34:00.092472  379891 ssh_runner.go:195] Run: crio config
	I1101 10:34:00.155759  379891 cni.go:84] Creating CNI manager for ""
	I1101 10:34:00.155782  379891 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 10:34:00.155808  379891 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:34:00.155839  379891 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.139 APIServerPort:8555 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-options-842807 NodeName:cert-options-842807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.139"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.139 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:34:00.156038  379891 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.139
	  bindPort: 8555
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-options-842807"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.139"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.139"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8555
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:34:00.156134  379891 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:34:00.175317  379891 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:34:00.175385  379891 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:34:00.192718  379891 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1101 10:34:00.219901  379891 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:34:00.249607  379891 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
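	The 2222-byte kubeadm.yaml.new copied here is the multi-document config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch, assuming gopkg.in/yaml.v3 and a local copy of the file, of splitting such a config into its component documents:

	// split_kubeadm_config.go - prints apiVersion/kind for each document in a multi-document YAML file.
	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		// Assumed local copy; on the node the file is /var/tmp/minikube/kubeadm.yaml.new.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // no more documents
				}
				panic(err)
			}
			fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
		}
	}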
	I1101 10:34:00.277554  379891 ssh_runner.go:195] Run: grep 192.168.83.139	control-plane.minikube.internal$ /etc/hosts
	I1101 10:34:00.282467  379891 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.139	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:34:00.299162  379891 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:34:00.473519  379891 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:34:00.525959  379891 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807 for IP: 192.168.83.139
	I1101 10:34:00.525974  379891 certs.go:195] generating shared ca certs ...
	I1101 10:34:00.525995  379891 certs.go:227] acquiring lock for ca certs: {Name:mkba0fe79f6b0ed99353299aaf34c6fbc547c6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:00.526206  379891 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key
	I1101 10:34:00.526260  379891 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key
	I1101 10:34:00.526269  379891 certs.go:257] generating profile certs ...
	I1101 10:34:00.526356  379891 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/client.key
	I1101 10:34:00.526372  379891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/client.crt with IP's: []
	I1101 10:34:00.716180  379891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/client.crt ...
	I1101 10:34:00.716198  379891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/client.crt: {Name:mkd3427c467028d39833933fa3d239b5a8f3f5fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:00.716376  379891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/client.key ...
	I1101 10:34:00.716385  379891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/client.key: {Name:mk2c7de82a4c7570a84a154a2826550b62f505a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:00.716502  379891 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/apiserver.key.f115299f
	I1101 10:34:00.716515  379891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/apiserver.crt.f115299f with IP's: [127.0.0.1 192.168.15.15 10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.139]
	I1101 10:34:00.969533  379891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/apiserver.crt.f115299f ...
	I1101 10:34:00.969553  379891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/apiserver.crt.f115299f: {Name:mkeeb6216946f4ffb1c16e79c5b2f8d706a51bba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:00.969736  379891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/apiserver.key.f115299f ...
	I1101 10:34:00.969744  379891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/apiserver.key.f115299f: {Name:mk5726042eb95b314739e63b575c9b37d254d235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:00.969818  379891 certs.go:382] copying /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/apiserver.crt.f115299f -> /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/apiserver.crt
	I1101 10:34:00.969910  379891 certs.go:386] copying /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/apiserver.key.f115299f -> /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/apiserver.key
	I1101 10:34:00.969961  379891 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/proxy-client.key
	I1101 10:34:00.970001  379891 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/proxy-client.crt with IP's: []
	I1101 10:34:01.096688  379891 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/proxy-client.crt ...
	I1101 10:34:01.096707  379891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/proxy-client.crt: {Name:mk18607a4ab3a293ca093f384b2f3571307809b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:01.096939  379891 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/proxy-client.key ...
	I1101 10:34:01.096952  379891 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/cert-options-842807/proxy-client.key: {Name:mka4ac4a61321bfcbbd8e5ee1f9c4c5ff4c3386b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:34:01.097132  379891 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/348518.pem (1338 bytes)
	W1101 10:34:01.097167  379891 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-344560/.minikube/certs/348518_empty.pem, impossibly tiny 0 bytes
	I1101 10:34:01.097173  379891 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:34:01.097192  379891 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:34:01.097209  379891 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:34:01.097227  379891 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem (1679 bytes)
	I1101 10:34:01.097261  379891 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem (1708 bytes)
	I1101 10:34:01.097905  379891 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
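	The certs.go/crypto.go steps above issue the profile certificates in-process with Go's crypto/x509 rather than shelling out to openssl. A self-contained sketch of producing a certificate that carries the extra IP and DNS SANs requested by the cert-options test; it self-signs for brevity (an assumption), whereas the real flow signs with the shared minikubeCA key.

	// cert_sketch.go - writes a self-signed certificate with additional SANs to apiserver.crt/apiserver.key.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// Extra SANs matching the cert-options profile above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.15.15"), net.ParseIP("192.168.83.139")},
			DNSNames:    []string{"localhost", "www.google.com"},
		}
		// Self-signed for brevity; the real code signs with the CA key instead of tmpl/key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		certOut, err := os.Create("apiserver.crt")
		if err != nil {
			panic(err)
		}
		defer certOut.Close()
		pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})

		keyOut, err := os.Create("apiserver.key")
		if err != nil {
			panic(err)
		}
		defer keyOut.Close()
		pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}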
	I1101 10:33:59.639824  380597 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1101 10:33:59.640099  380597 start.go:159] libmachine.API.Create for "old-k8s-version-152855" (driver="kvm2")
	I1101 10:33:59.640144  380597 client.go:173] LocalClient.Create starting
	I1101 10:33:59.640222  380597 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem
	I1101 10:33:59.640271  380597 main.go:143] libmachine: Decoding PEM data...
	I1101 10:33:59.640303  380597 main.go:143] libmachine: Parsing certificate...
	I1101 10:33:59.640401  380597 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem
	I1101 10:33:59.640435  380597 main.go:143] libmachine: Decoding PEM data...
	I1101 10:33:59.640453  380597 main.go:143] libmachine: Parsing certificate...
	I1101 10:33:59.641019  380597 main.go:143] libmachine: creating domain...
	I1101 10:33:59.641038  380597 main.go:143] libmachine: creating network...
	I1101 10:33:59.642778  380597 main.go:143] libmachine: found existing default network
	I1101 10:33:59.643043  380597 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 10:33:59.644368  380597 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bb5920}
	I1101 10:33:59.644499  380597 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-old-k8s-version-152855</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 10:33:59.651303  380597 main.go:143] libmachine: creating private network mk-old-k8s-version-152855 192.168.39.0/24...
	I1101 10:33:59.748725  380597 main.go:143] libmachine: private network mk-old-k8s-version-152855 192.168.39.0/24 created
	I1101 10:33:59.749053  380597 main.go:143] libmachine: <network>
	  <name>mk-old-k8s-version-152855</name>
	  <uuid>3cb6493d-cf7d-4b1d-9a03-720a92036f4a</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:f3:59:34'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
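	A rough sketch of defining and starting a private network like the one above through the libvirt Go bindings; the import path (libvirt.org/go/libvirt), the connection URI, and the trimmed XML are assumptions, and error handling is reduced to panics.

	// network_sketch.go - defines and starts a persistent libvirt network from XML.
	package main

	import (
		"fmt"

		libvirt "libvirt.org/go/libvirt"
	)

	const networkXML = `
	<network>
	  <name>mk-old-k8s-version-152855</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		net, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			panic(err)
		}
		defer net.Free()

		// Start the persistent network (roughly `virsh net-start mk-old-k8s-version-152855`).
		if err := net.Create(); err != nil {
			panic(err)
		}
		active, _ := net.IsActive()
		fmt.Println("network active:", active)
	}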
	
	I1101 10:33:59.749089  380597 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21832-344560/.minikube/machines/old-k8s-version-152855 ...
	I1101 10:33:59.749130  380597 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21832-344560/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 10:33:59.749144  380597 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 10:33:59.749222  380597 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21832-344560/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21832-344560/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 10:34:00.052899  380597 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/old-k8s-version-152855/id_rsa...
	I1101 10:34:00.310841  380597 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/old-k8s-version-152855/old-k8s-version-152855.rawdisk...
	I1101 10:34:00.310914  380597 main.go:143] libmachine: Writing magic tar header
	I1101 10:34:00.310945  380597 main.go:143] libmachine: Writing SSH key tar header
	I1101 10:34:00.311057  380597 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21832-344560/.minikube/machines/old-k8s-version-152855 ...
	I1101 10:34:00.311160  380597 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/old-k8s-version-152855
	I1101 10:34:00.311221  380597 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube/machines/old-k8s-version-152855 (perms=drwx------)
	I1101 10:34:00.311253  380597 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube/machines
	I1101 10:34:00.311275  380597 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube/machines (perms=drwxr-xr-x)
	I1101 10:34:00.311294  380597 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 10:34:00.311306  380597 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube (perms=drwxr-xr-x)
	I1101 10:34:00.311316  380597 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560
	I1101 10:34:00.311329  380597 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560 (perms=drwxrwxr-x)
	I1101 10:34:00.311339  380597 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 10:34:00.311349  380597 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 10:34:00.311359  380597 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 10:34:00.311369  380597 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 10:34:00.311378  380597 main.go:143] libmachine: checking permissions on dir: /home
	I1101 10:34:00.311385  380597 main.go:143] libmachine: skipping /home - not owner
	I1101 10:34:00.311390  380597 main.go:143] libmachine: defining domain...
	I1101 10:34:00.312688  380597 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>old-k8s-version-152855</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/old-k8s-version-152855/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/old-k8s-version-152855/old-k8s-version-152855.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-old-k8s-version-152855'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 10:34:00.317947  380597 main.go:143] libmachine: domain old-k8s-version-152855 has defined MAC address 52:54:00:c1:69:a6 in network default
	I1101 10:34:00.318671  380597 main.go:143] libmachine: domain old-k8s-version-152855 has defined MAC address 52:54:00:bb:2b:3d in network mk-old-k8s-version-152855
	I1101 10:34:00.318692  380597 main.go:143] libmachine: starting domain...
	I1101 10:34:00.318696  380597 main.go:143] libmachine: ensuring networks are active...
	I1101 10:34:00.319691  380597 main.go:143] libmachine: Ensuring network default is active
	I1101 10:34:00.320195  380597 main.go:143] libmachine: Ensuring network mk-old-k8s-version-152855 is active
	I1101 10:34:00.321175  380597 main.go:143] libmachine: getting domain XML...
	I1101 10:34:00.322562  380597 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>old-k8s-version-152855</name>
	  <uuid>e42be5d3-d7a0-43ba-8ef7-f88550776af3</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/old-k8s-version-152855/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/old-k8s-version-152855/old-k8s-version-152855.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:bb:2b:3d'/>
	      <source network='mk-old-k8s-version-152855'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:c1:69:a6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 10:34:02.169264  380597 main.go:143] libmachine: waiting for domain to start...
	I1101 10:34:02.170956  380597 main.go:143] libmachine: domain is now running
	I1101 10:34:02.170982  380597 main.go:143] libmachine: waiting for IP...
	I1101 10:34:02.172151  380597 main.go:143] libmachine: domain old-k8s-version-152855 has defined MAC address 52:54:00:bb:2b:3d in network mk-old-k8s-version-152855
	I1101 10:34:02.172950  380597 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-152855 (source=lease)
	I1101 10:34:02.172973  380597 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:34:02.173347  380597 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-152855 in network mk-old-k8s-version-152855 (interfaces detected: [])
	I1101 10:34:02.173410  380597 retry.go:31] will retry after 202.991762ms: waiting for domain to come up
	I1101 10:34:02.378099  380597 main.go:143] libmachine: domain old-k8s-version-152855 has defined MAC address 52:54:00:bb:2b:3d in network mk-old-k8s-version-152855
	I1101 10:34:02.378764  380597 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-152855 (source=lease)
	I1101 10:34:02.378782  380597 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:34:02.379225  380597 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-152855 in network mk-old-k8s-version-152855 (interfaces detected: [])
	I1101 10:34:02.379268  380597 retry.go:31] will retry after 248.406587ms: waiting for domain to come up
	I1101 10:34:02.629684  380597 main.go:143] libmachine: domain old-k8s-version-152855 has defined MAC address 52:54:00:bb:2b:3d in network mk-old-k8s-version-152855
	I1101 10:34:02.630417  380597 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-152855 (source=lease)
	I1101 10:34:02.630450  380597 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:34:02.630853  380597 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-152855 in network mk-old-k8s-version-152855 (interfaces detected: [])
	I1101 10:34:02.630917  380597 retry.go:31] will retry after 301.818723ms: waiting for domain to come up
	I1101 10:34:02.934645  380597 main.go:143] libmachine: domain old-k8s-version-152855 has defined MAC address 52:54:00:bb:2b:3d in network mk-old-k8s-version-152855
	I1101 10:34:02.935372  380597 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-152855 (source=lease)
	I1101 10:34:02.935390  380597 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:34:02.935843  380597 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-152855 in network mk-old-k8s-version-152855 (interfaces detected: [])
	I1101 10:34:02.935897  380597 retry.go:31] will retry after 391.899171ms: waiting for domain to come up
	I1101 10:34:03.329802  380597 main.go:143] libmachine: domain old-k8s-version-152855 has defined MAC address 52:54:00:bb:2b:3d in network mk-old-k8s-version-152855
	I1101 10:34:03.330571  380597 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-152855 (source=lease)
	I1101 10:34:03.330587  380597 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:34:03.331057  380597 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-152855 in network mk-old-k8s-version-152855 (interfaces detected: [])
	I1101 10:34:03.331111  380597 retry.go:31] will retry after 707.062609ms: waiting for domain to come up
	I1101 10:34:04.040189  380597 main.go:143] libmachine: domain old-k8s-version-152855 has defined MAC address 52:54:00:bb:2b:3d in network mk-old-k8s-version-152855
	I1101 10:34:04.041009  380597 main.go:143] libmachine: no network interface addresses found for domain old-k8s-version-152855 (source=lease)
	I1101 10:34:04.041032  380597 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:34:04.041493  380597 main.go:143] libmachine: unable to find current IP address of domain old-k8s-version-152855 in network mk-old-k8s-version-152855 (interfaces detected: [])
	I1101 10:34:04.041542  380597 retry.go:31] will retry after 768.54709ms: waiting for domain to come up
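	The "will retry after ..." lines above are a growing, jittered backoff that keeps polling until the freshly started domain obtains a DHCP lease. A stripped-down sketch of that pattern; the lease lookup itself is stubbed out (an assumption), since it depends on the libvirt bindings.

	// ip_wait_sketch.go - retries a lookup with growing, jittered delays until a deadline.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the DHCP leases of the private network.
	func lookupIP() (string, error) {
		return "", errors.New("no lease yet") // assumption: replace with a real lease lookup
	}

	func main() {
		deadline := time.Now().Add(3 * time.Minute)
		wait := 200 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("domain IP:", ip)
				return
			}
			// Grow the delay and add jitter, roughly matching the 203ms, 248ms, 301ms, ... steps above.
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
			fmt.Printf("attempt %d: %v, retrying in %v\n", attempt, err, sleep)
			time.Sleep(sleep)
			wait = wait * 3 / 2
		}
		fmt.Println("timed out waiting for domain to come up")
	}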
	
	
	==> CRI-O <==
	Nov 01 10:34:04 pause-876158 crio[2790]: time="2025-11-01 10:34:04.942209000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761993244942174100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82a9f1cb-bd19-4905-9cd7-5820f7ea7753 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:04 pause-876158 crio[2790]: time="2025-11-01 10:34:04.943212613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4bf4266b-34a4-4d97-8649-dbd654e6e5fa name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:04 pause-876158 crio[2790]: time="2025-11-01 10:34:04.943316542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4bf4266b-34a4-4d97-8649-dbd654e6e5fa name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:04 pause-876158 crio[2790]: time="2025-11-01 10:34:04.943720767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:265c57d85cf516e7f15b9f4718451a80368f2803140064598a5700c7c6183070,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761993224833858308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48282534e030364f2f8c5f1cded5e59cd875050e613ba6d0bf233bb1692286c,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761993224825434388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f171ed0fbd1f5bf2ae64c6e5b4951bdf2e68b0fc7c1606c22c3c9e7533f915b0,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761993219073231006,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688c471e89968dc1bdbbf0813fa854bc949909909d1c810fd980416e633315dc,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761993219035168734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c71a895a06e893a8df5e00f8b827024417d6f59c1ce8d00280754ceff495603,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761993219029027280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624bd4fc21c8a04dd1c550689123fdbe90eedd800087e615514693c98342dfc3,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761993218993382217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5
a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761993195029120676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761993193920460830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5664f4cdb8b0cb0cd1151361649408884d21ad30ec13a20d477e8e43b30f8b6c,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761993193998332010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fec3be4d90e193581410f134f96ec6279009038eee2bdbe11637a437d72eaf0,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761993193861011137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},
Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b1529a9d2203abffb52d97d6edf5d03eb7c066c60de924bf3e83b04e830284,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761993193835506367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aebb8f8da407f0f84efde4b63cf2e47e9eb50d7ced3ee123d1953b0dbd8510,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761993193781180020,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4bf4266b-34a4-4d97-8649-dbd654e6e5fa name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.000141878Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15daefa1-70ba-4f75-aaa2-1c6696c0b154 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.000235611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15daefa1-70ba-4f75-aaa2-1c6696c0b154 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.002702487Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62eebed5-c1ef-4ff1-b856-5bac8361a525 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.003172829Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761993245003143396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62eebed5-c1ef-4ff1-b856-5bac8361a525 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.003867291Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=31831192-de18-40d1-b286-948995ee3b3d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.003926466Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=31831192-de18-40d1-b286-948995ee3b3d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.004202505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:265c57d85cf516e7f15b9f4718451a80368f2803140064598a5700c7c6183070,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761993224833858308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48282534e030364f2f8c5f1cded5e59cd875050e613ba6d0bf233bb1692286c,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761993224825434388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f171ed0fbd1f5bf2ae64c6e5b4951bdf2e68b0fc7c1606c22c3c9e7533f915b0,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761993219073231006,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688c471e89968dc1bdbbf0813fa854bc949909909d1c810fd980416e633315dc,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761993219035168734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c71a895a06e893a8df5e00f8b827024417d6f59c1ce8d00280754ceff495603,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761993219029027280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624bd4fc21c8a04dd1c550689123fdbe90eedd800087e615514693c98342dfc3,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761993218993382217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5
a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761993195029120676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761993193920460830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5664f4cdb8b0cb0cd1151361649408884d21ad30ec13a20d477e8e43b30f8b6c,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761993193998332010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fec3be4d90e193581410f134f96ec6279009038eee2bdbe11637a437d72eaf0,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761993193861011137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},
Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b1529a9d2203abffb52d97d6edf5d03eb7c066c60de924bf3e83b04e830284,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761993193835506367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aebb8f8da407f0f84efde4b63cf2e47e9eb50d7ced3ee123d1953b0dbd8510,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761993193781180020,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=31831192-de18-40d1-b286-948995ee3b3d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.059469829Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f70873c1-3a07-4f1c-92a0-49928e4d6504 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.059555910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f70873c1-3a07-4f1c-92a0-49928e4d6504 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.061142956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9967d2d-dc22-48aa-8d51-c34896f2f135 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.061560001Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761993245061535256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9967d2d-dc22-48aa-8d51-c34896f2f135 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.062390309Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54c57e09-5129-4c05-98a5-a44f3ae80a2c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.062448025Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54c57e09-5129-4c05-98a5-a44f3ae80a2c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.063329856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:265c57d85cf516e7f15b9f4718451a80368f2803140064598a5700c7c6183070,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761993224833858308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48282534e030364f2f8c5f1cded5e59cd875050e613ba6d0bf233bb1692286c,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761993224825434388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f171ed0fbd1f5bf2ae64c6e5b4951bdf2e68b0fc7c1606c22c3c9e7533f915b0,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761993219073231006,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688c471e89968dc1bdbbf0813fa854bc949909909d1c810fd980416e633315dc,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761993219035168734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c71a895a06e893a8df5e00f8b827024417d6f59c1ce8d00280754ceff495603,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761993219029027280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624bd4fc21c8a04dd1c550689123fdbe90eedd800087e615514693c98342dfc3,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761993218993382217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5
a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761993195029120676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761993193920460830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5664f4cdb8b0cb0cd1151361649408884d21ad30ec13a20d477e8e43b30f8b6c,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761993193998332010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fec3be4d90e193581410f134f96ec6279009038eee2bdbe11637a437d72eaf0,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761993193861011137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},
Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b1529a9d2203abffb52d97d6edf5d03eb7c066c60de924bf3e83b04e830284,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761993193835506367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aebb8f8da407f0f84efde4b63cf2e47e9eb50d7ced3ee123d1953b0dbd8510,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761993193781180020,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54c57e09-5129-4c05-98a5-a44f3ae80a2c name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.119183076Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be326796-3a02-42aa-9c34-34814d2fa094 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.119319632Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be326796-3a02-42aa-9c34-34814d2fa094 name=/runtime.v1.RuntimeService/Version
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.122233557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe9db348-b695-4b03-bd49-94f143a8d666 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.122906982Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761993245122863774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe9db348-b695-4b03-bd49-94f143a8d666 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.124072945Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d9c67eb9-ec26-4a17-9645-4be06240f6d3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.124404461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d9c67eb9-ec26-4a17-9645-4be06240f6d3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 10:34:05 pause-876158 crio[2790]: time="2025-11-01 10:34:05.125169579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:265c57d85cf516e7f15b9f4718451a80368f2803140064598a5700c7c6183070,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761993224833858308,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c48282534e030364f2f8c5f1cded5e59cd875050e613ba6d0bf233bb1692286c,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761993224825434388,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f171ed0fbd1f5bf2ae64c6e5b4951bdf2e68b0fc7c1606c22c3c9e7533f915b0,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761993219073231006,Labels:map[string]string{io.kubernetes.
container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:688c471e89968dc1bdbbf0813fa854bc949909909d1c810fd980416e633315dc,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761993219035168734,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c71a895a06e893a8df5e00f8b827024417d6f59c1ce8d00280754ceff495603,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761993219029027280,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:624bd4fc21c8a04dd1c550689123fdbe90eedd800087e615514693c98342dfc3,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Imag
e:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761993218993382217,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584,PodSandboxId:15c35f8a35a17a5449614d0ae7360cbcca44145318b3b5
a633a16fe226079ae3,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761993195029120676,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-jt729,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92e8f148-6a9d-427d-a523-68a579131ec6,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubern
etes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5,PodSandboxId:3daf63d94c95af4b6168e90e8e3b1e9a2a31691b3e526f5ef65f215b6c1d9c35,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761993193920460830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4fktf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3154768-c6f0-4b8f-9a95-c6f6fb16dc98,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.
kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5664f4cdb8b0cb0cd1151361649408884d21ad30ec13a20d477e8e43b30f8b6c,PodSandboxId:cfe6db88f50c404a911e999b66ef5f1daf00a42a24c744481c0560f0e1975d3e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761993193998332010,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f25a81ea579abde52be137a9a148994e,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"
name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2fec3be4d90e193581410f134f96ec6279009038eee2bdbe11637a437d72eaf0,PodSandboxId:4674f9583203380588e79c6f84e891e2eb0eb002a292d75a84206888895d67a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761993193861011137,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 949ecae0312f4c3b69405f37e19e08f8,},
Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78b1529a9d2203abffb52d97d6edf5d03eb7c066c60de924bf3e83b04e830284,PodSandboxId:630273ed62424ade2cfd75fcb2480e14dfbbfc9182b16a8ecf7d053f3b849bc6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761993193835506367,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-876158,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 803348f0ad4648836b4395fbe9f96117,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aebb8f8da407f0f84efde4b63cf2e47e9eb50d7ced3ee123d1953b0dbd8510,PodSandboxId:c17725687394db77db08b7e79dacb9c0194b4fb9b28e5c62220cb0a571ce043e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761993193781180020,Labels:map[string]string{io.kubernetes.containe
r.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-876158,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 973c63e287059513809e8ca2a9137cd0,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d9c67eb9-ec26-4a17-9645-4be06240f6d3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	265c57d85cf51       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   20 seconds ago      Running             kube-proxy                2                   3daf63d94c95a       kube-proxy-4fktf
	c48282534e030       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   20 seconds ago      Running             coredns                   2                   15c35f8a35a17       coredns-66bc5c9577-jt729
	f171ed0fbd1f5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   26 seconds ago      Running             kube-controller-manager   2                   cfe6db88f50c4       kube-controller-manager-pause-876158
	688c471e89968       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   26 seconds ago      Running             kube-apiserver            2                   4674f95832033       kube-apiserver-pause-876158
	5c71a895a06e8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   26 seconds ago      Running             kube-scheduler            2                   c17725687394d       kube-scheduler-pause-876158
	624bd4fc21c8a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   26 seconds ago      Running             etcd                      2                   630273ed62424       etcd-pause-876158
	2baabf8753192       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   50 seconds ago      Exited              coredns                   1                   15c35f8a35a17       coredns-66bc5c9577-jt729
	5664f4cdb8b0c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   51 seconds ago      Exited              kube-controller-manager   1                   cfe6db88f50c4       kube-controller-manager-pause-876158
	e318598b1cdc5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   51 seconds ago      Exited              kube-proxy                1                   3daf63d94c95a       kube-proxy-4fktf
	2fec3be4d90e1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   51 seconds ago      Exited              kube-apiserver            1                   4674f95832033       kube-apiserver-pause-876158
	78b1529a9d220       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   51 seconds ago      Exited              etcd                      1                   630273ed62424       etcd-pause-876158
	79aebb8f8da40       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   51 seconds ago      Exited              kube-scheduler            1                   c17725687394d       kube-scheduler-pause-876158
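	The container status table above was captured from CRI-O inside the profile's VM and can be regenerated while the profile is still up. A minimal Go sketch, assuming a minikube binary on PATH and the pause-876158 profile from this run (the dumpContainerStatus helper is illustrative, not part of the test suite):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// dumpContainerStatus lists every CRI-O container inside the profile's VM,
	// which is where the "container status" table above comes from.
	func dumpContainerStatus(profile string) (string, error) {
		// "minikube ssh" runs the quoted command on the node; sudo is needed
		// because crictl talks to the CRI-O socket as root.
		out, err := exec.Command("minikube", "-p", profile, "ssh", "sudo crictl ps -a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := dumpContainerStatus("pause-876158")
		if err != nil {
			log.Fatalf("crictl ps -a failed: %v\n%s", err, out)
		}
		fmt.Print(out)
	}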
	
	
	==> coredns [2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:53276 - 56786 "HINFO IN 7355434863311998644.4232500489853051955. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.081693452s
	
	
	==> coredns [c48282534e030364f2f8c5f1cded5e59cd875050e613ba6d0bf233bb1692286c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52701 - 33642 "HINFO IN 1384503627503618802.3154399698577371114. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072873502s
	
	
	==> describe nodes <==
	Name:               pause-876158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-876158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d
	                    minikube.k8s.io/name=pause-876158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T10_32_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 10:31:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-876158
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 10:34:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 10:33:43 +0000   Sat, 01 Nov 2025 10:31:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 10:33:43 +0000   Sat, 01 Nov 2025 10:31:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 10:33:43 +0000   Sat, 01 Nov 2025 10:31:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 10:33:43 +0000   Sat, 01 Nov 2025 10:32:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.174
	  Hostname:    pause-876158
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 654173f0fde3400092ffb627889717a5
	  System UUID:                654173f0-fde3-4000-92ff-b627889717a5
	  Boot ID:                    e460a208-5d5f-4be6-a9cf-9a4098dfa869
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-jt729                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     2m
	  kube-system                 etcd-pause-876158                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         2m7s
	  kube-system                 kube-apiserver-pause-876158             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-pause-876158    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-4fktf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-pause-876158             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 118s               kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 2m5s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m5s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m5s               kubelet          Node pause-876158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s               kubelet          Node pause-876158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s               kubelet          Node pause-876158 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m4s               kubelet          Node pause-876158 status is now: NodeReady
	  Normal  RegisteredNode           2m1s               node-controller  Node pause-876158 event: Registered Node pause-876158 in Controller
	  Normal  RegisteredNode           44s                node-controller  Node pause-876158 event: Registered Node pause-876158 in Controller
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)  kubelet          Node pause-876158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)  kubelet          Node pause-876158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)  kubelet          Node pause-876158 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-876158 event: Registered Node pause-876158 in Controller
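	The conditions and events above are read from the Node object in the API server. A minimal client-go sketch that prints the same condition rows and fails if the node is not Ready, assuming KUBECONFIG points at this profile's context (the node name is taken from this run):

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// KUBECONFIG pointing at this profile's context is an assumption;
		// minikube writes one kubeconfig entry per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-876158", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Print the same condition rows that "describe nodes" reports above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				log.Fatalf("node %s is not Ready: %s", node.Name, c.Message)
			}
		}
	}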
	
	
	==> dmesg <==
	[Nov 1 10:31] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001387] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005919] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.505794] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000026] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.090191] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.137646] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.100240] kauditd_printk_skb: 18 callbacks suppressed
	[Nov 1 10:32] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.005376] kauditd_printk_skb: 18 callbacks suppressed
	[ +10.779521] kauditd_printk_skb: 219 callbacks suppressed
	[ +23.896833] kauditd_printk_skb: 38 callbacks suppressed
	[Nov 1 10:33] kauditd_printk_skb: 297 callbacks suppressed
	[  +4.026798] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.138217] kauditd_printk_skb: 15 callbacks suppressed
	[  +6.579245] kauditd_printk_skb: 81 callbacks suppressed
	[  +6.928923] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [624bd4fc21c8a04dd1c550689123fdbe90eedd800087e615514693c98342dfc3] <==
	{"level":"warn","ts":"2025-11-01T10:33:42.682343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.710187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.720237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.746278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.753197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.770973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.805388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:42.868912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T10:33:45.716108Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"171.919017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:368"}
	{"level":"info","ts":"2025-11-01T10:33:45.716200Z","caller":"traceutil/trace.go:172","msg":"trace[576577424] range","detail":"{range_begin:/registry/services/endpoints/default/kubernetes; range_end:; response_count:1; response_revision:548; }","duration":"172.076412ms","start":"2025-11-01T10:33:45.544106Z","end":"2025-11-01T10:33:45.716183Z","steps":["trace[576577424] 'agreement among raft nodes before linearized reading'  (duration: 61.592411ms)","trace[576577424] 'range keys from in-memory index tree'  (duration: 110.252488ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:45.718411Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.742691ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6141109162183821431 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-876158.1873db80c7f4709d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-876158.1873db80c7f4709d\" value_size:462 lease:6141109162183821428 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:33:45.718734Z","caller":"traceutil/trace.go:172","msg":"trace[1439771260] transaction","detail":"{read_only:false; response_revision:549; number_of_response:1; }","duration":"181.372818ms","start":"2025-11-01T10:33:45.537344Z","end":"2025-11-01T10:33:45.718717Z","steps":["trace[1439771260] 'process raft request'  (duration: 68.393524ms)","trace[1439771260] 'compare'  (duration: 110.000166ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:33:45.921523Z","caller":"traceutil/trace.go:172","msg":"trace[312665755] linearizableReadLoop","detail":"{readStateIndex:591; appliedIndex:591; }","duration":"190.025505ms","start":"2025-11-01T10:33:45.731482Z","end":"2025-11-01T10:33:45.921507Z","steps":["trace[312665755] 'read index received'  (duration: 190.020736ms)","trace[312665755] 'applied index is now lower than readState.Index'  (duration: 4.119µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:45.973324Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"241.772675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/node-controller\" limit:1 ","response":"range_response_count:1 size:195"}
	{"level":"info","ts":"2025-11-01T10:33:45.973418Z","caller":"traceutil/trace.go:172","msg":"trace[640122231] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/node-controller; range_end:; response_count:1; response_revision:549; }","duration":"241.922675ms","start":"2025-11-01T10:33:45.731478Z","end":"2025-11-01T10:33:45.973401Z","steps":["trace[640122231] 'agreement among raft nodes before linearized reading'  (duration: 190.163526ms)","trace[640122231] 'range keys from in-memory index tree'  (duration: 51.521809ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:45.973683Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.635669ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:33:45.973769Z","caller":"traceutil/trace.go:172","msg":"trace[1396806250] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:550; }","duration":"194.720967ms","start":"2025-11-01T10:33:45.779027Z","end":"2025-11-01T10:33:45.973748Z","steps":["trace[1396806250] 'agreement among raft nodes before linearized reading'  (duration: 194.523999ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:33:45.973883Z","caller":"traceutil/trace.go:172","msg":"trace[1512195498] transaction","detail":"{read_only:false; response_revision:550; number_of_response:1; }","duration":"244.472618ms","start":"2025-11-01T10:33:45.729382Z","end":"2025-11-01T10:33:45.973854Z","steps":["trace[1512195498] 'process raft request'  (duration: 192.320924ms)","trace[1512195498] 'compare'  (duration: 51.34993ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:45.974062Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.861616ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" limit:1 ","response":"range_response_count:1 size:5731"}
	{"level":"info","ts":"2025-11-01T10:33:45.974092Z","caller":"traceutil/trace.go:172","msg":"trace[1271903266] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-jt729; range_end:; response_count:1; response_revision:550; }","duration":"177.899961ms","start":"2025-11-01T10:33:45.796184Z","end":"2025-11-01T10:33:45.974084Z","steps":["trace[1271903266] 'agreement among raft nodes before linearized reading'  (duration: 177.765699ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:33:46.127007Z","caller":"traceutil/trace.go:172","msg":"trace[1955359810] linearizableReadLoop","detail":"{readStateIndex:592; appliedIndex:592; }","duration":"126.987794ms","start":"2025-11-01T10:33:45.999972Z","end":"2025-11-01T10:33:46.126960Z","steps":["trace[1955359810] 'read index received'  (duration: 126.984094ms)","trace[1955359810] 'applied index is now lower than readState.Index'  (duration: 3.283µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:46.127977Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"127.98992ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T10:33:46.128015Z","caller":"traceutil/trace.go:172","msg":"trace[1801600657] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:550; }","duration":"128.041499ms","start":"2025-11-01T10:33:45.999965Z","end":"2025-11-01T10:33:46.128007Z","steps":["trace[1801600657] 'agreement among raft nodes before linearized reading'  (duration: 127.141275ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:33:46.128286Z","caller":"traceutil/trace.go:172","msg":"trace[890136585] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"139.456187ms","start":"2025-11-01T10:33:45.988821Z","end":"2025-11-01T10:33:46.128277Z","steps":["trace[890136585] 'process raft request'  (duration: 139.265035ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:33:46.128411Z","caller":"traceutil/trace.go:172","msg":"trace[434859032] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"140.932575ms","start":"2025-11-01T10:33:45.987469Z","end":"2025-11-01T10:33:46.128401Z","steps":["trace[434859032] 'process raft request'  (duration: 139.479553ms)"],"step_count":1}
	
	
	==> etcd [78b1529a9d2203abffb52d97d6edf5d03eb7c066c60de924bf3e83b04e830284] <==
	{"level":"warn","ts":"2025-11-01T10:33:18.907799Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.634115ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6141109162177553293 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" mod_revision:402 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" value_size:5658 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T10:33:18.907824Z","caller":"traceutil/trace.go:172","msg":"trace[664005657] range","detail":"{range_begin:/registry/clusterroles; range_end:; response_count:0; response_revision:418; }","duration":"184.906584ms","start":"2025-11-01T10:33:18.722883Z","end":"2025-11-01T10:33:18.907790Z","steps":["trace[664005657] 'agreement among raft nodes before linearized reading'  (duration: 59.192665ms)","trace[664005657] 'range keys from in-memory index tree'  (duration: 125.597707ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T10:33:18.907851Z","caller":"traceutil/trace.go:172","msg":"trace[1906371261] linearizableReadLoop","detail":"{readStateIndex:446; appliedIndex:445; }","duration":"125.798541ms","start":"2025-11-01T10:33:18.782046Z","end":"2025-11-01T10:33:18.907845Z","steps":["trace[1906371261] 'read index received'  (duration: 16.776µs)","trace[1906371261] 'applied index is now lower than readState.Index'  (duration: 125.781295ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:18.907954Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.401506ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" limit:1 ","response":"range_response_count:1 size:442"}
	{"level":"info","ts":"2025-11-01T10:33:18.907973Z","caller":"traceutil/trace.go:172","msg":"trace[1666495431] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:419; }","duration":"184.421085ms","start":"2025-11-01T10:33:18.723545Z","end":"2025-11-01T10:33:18.907966Z","steps":["trace[1666495431] 'agreement among raft nodes before linearized reading'  (duration: 184.320979ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T10:33:18.908208Z","caller":"traceutil/trace.go:172","msg":"trace[561682606] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"329.28051ms","start":"2025-11-01T10:33:18.578898Z","end":"2025-11-01T10:33:18.908178Z","steps":["trace[561682606] 'process raft request'  (duration: 203.219701ms)","trace[561682606] 'compare'  (duration: 125.541092ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T10:33:18.908339Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T10:33:18.578879Z","time spent":"329.410527ms","remote":"127.0.0.1:35428","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5717,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" mod_revision:402 > success:<request_put:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" value_size:5658 >> failure:<request_range:<key:\"/registry/pods/kube-system/coredns-66bc5c9577-jt729\" > >"}
	{"level":"info","ts":"2025-11-01T10:33:36.030188Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T10:33:36.030273Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-876158","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.174:2380"],"advertise-client-urls":["https://192.168.72.174:2379"]}
	{"level":"error","ts":"2025-11-01T10:33:36.030364Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:33:36.032079Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T10:33:36.032140Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:33:36.032165Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6939dc92cf6d5539","current-leader-member-id":"6939dc92cf6d5539"}
	{"level":"info","ts":"2025-11-01T10:33:36.032249Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:33:36.032293Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:33:36.032364Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:33:36.032370Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:33:36.032366Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T10:33:36.032409Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.72.174:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T10:33:36.032416Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.72.174:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T10:33:36.032421Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.174:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:33:36.036395Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.72.174:2380"}
	{"level":"info","ts":"2025-11-01T10:33:36.036639Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.72.174:2380"}
	{"level":"error","ts":"2025-11-01T10:33:36.036561Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.72.174:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T10:33:36.036725Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-876158","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.72.174:2380"],"advertise-client-urls":["https://192.168.72.174:2379"]}
	
	
	==> kernel <==
	 10:34:05 up 2 min,  0 users,  load average: 1.09, 0.50, 0.19
	Linux pause-876158 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [2fec3be4d90e193581410f134f96ec6279009038eee2bdbe11637a437d72eaf0] <==
	I1101 10:33:25.956422       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I1101 10:33:25.956428       1 controller.go:132] Ending legacy_token_tracking_controller
	I1101 10:33:25.956431       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I1101 10:33:25.956441       1 customresource_discovery_controller.go:332] Shutting down DiscoveryController
	I1101 10:33:25.956986       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 10:33:25.957222       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 10:33:25.957934       1 dynamic_serving_content.go:149] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I1101 10:33:25.957991       1 dynamic_cafile_content.go:175] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1101 10:33:25.958032       1 object_count_tracker.go:141] "StorageObjectCountTracker pruner is exiting"
	I1101 10:33:25.957940       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I1101 10:33:25.959243       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:33:25.959303       1 secure_serving.go:259] Stopped listening on [::]:8443
	I1101 10:33:25.959507       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I1101 10:33:25.960051       1 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1101 10:33:25.957950       1 system_namespaces_controller.go:76] Shutting down system namespaces controller
	I1101 10:33:25.958009       1 controller.go:84] Shutting down OpenAPI AggregationController
	I1101 10:33:25.956002       1 controller.go:157] Shutting down quota evaluator
	I1101 10:33:25.960342       1 controller.go:176] quota evaluator worker shutdown
	I1101 10:33:25.958134       1 cluster_authentication_trust_controller.go:482] Shutting down cluster_authentication_trust_controller controller
	I1101 10:33:25.960363       1 controller.go:176] quota evaluator worker shutdown
	I1101 10:33:25.960369       1 controller.go:176] quota evaluator worker shutdown
	I1101 10:33:25.960375       1 controller.go:176] quota evaluator worker shutdown
	I1101 10:33:25.960381       1 controller.go:176] quota evaluator worker shutdown
	I1101 10:33:25.958145       1 crd_finalizer.go:281] Shutting down CRDFinalizer
	I1101 10:33:25.959161       1 repairip.go:246] Shutting down ipallocator-repair-controller
	
	
	==> kube-apiserver [688c471e89968dc1bdbbf0813fa854bc949909909d1c810fd980416e633315dc] <==
	I1101 10:33:43.724726       1 aggregator.go:171] initial CRD sync complete...
	I1101 10:33:43.724789       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 10:33:43.724808       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 10:33:43.724823       1 cache.go:39] Caches are synced for autoregister controller
	I1101 10:33:43.749660       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 10:33:43.752144       1 policy_source.go:240] refreshing policies
	I1101 10:33:43.784018       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 10:33:43.787865       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 10:33:43.789012       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 10:33:43.789185       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 10:33:43.789353       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 10:33:43.789410       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 10:33:43.795396       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 10:33:43.802306       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 10:33:43.840408       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 10:33:44.557039       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 10:33:44.590963       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1101 10:33:45.722906       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.174]
	I1101 10:33:45.726997       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 10:33:45.986689       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 10:33:46.308740       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 10:33:46.382390       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 10:33:46.446412       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 10:33:46.458300       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 10:33:53.553203       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5664f4cdb8b0cb0cd1151361649408884d21ad30ec13a20d477e8e43b30f8b6c] <==
	I1101 10:33:21.203539       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 10:33:21.204485       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:33:21.207739       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 10:33:21.219775       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 10:33:21.220057       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 10:33:21.221732       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 10:33:21.222122       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 10:33:21.222207       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 10:33:21.224255       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 10:33:21.228182       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 10:33:21.230576       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 10:33:21.231930       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:33:21.232009       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:33:21.235724       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 10:33:21.237960       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:33:21.239178       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 10:33:21.248462       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 10:33:21.254704       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:33:21.259039       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 10:33:21.268054       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 10:33:21.269520       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 10:33:21.269688       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:33:21.269704       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:33:21.269743       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:33:21.313541       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [f171ed0fbd1f5bf2ae64c6e5b4951bdf2e68b0fc7c1606c22c3c9e7533f915b0] <==
	I1101 10:33:47.441839       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 10:33:47.441913       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 10:33:47.441948       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 10:33:47.441952       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 10:33:47.441957       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 10:33:47.444576       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 10:33:47.448022       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 10:33:47.448088       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 10:33:47.448273       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 10:33:47.448298       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 10:33:47.448308       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 10:33:47.448315       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 10:33:47.450309       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 10:33:47.450448       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 10:33:47.450736       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 10:33:47.450840       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-876158"
	I1101 10:33:47.450886       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 10:33:47.451782       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 10:33:47.451967       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 10:33:47.457562       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 10:33:47.457659       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 10:33:47.458761       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 10:33:47.461519       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 10:33:47.463914       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 10:33:47.467247       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [265c57d85cf516e7f15b9f4718451a80368f2803140064598a5700c7c6183070] <==
	I1101 10:33:45.224393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:33:45.325788       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:33:45.325927       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.174"]
	E1101 10:33:45.326079       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:33:45.375068       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:33:45.375150       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:33:45.375183       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:33:45.387098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:33:45.387468       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:33:45.387487       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:45.393687       1 config.go:200] "Starting service config controller"
	I1101 10:33:45.393737       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:33:45.393760       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:33:45.393764       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:33:45.393774       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:33:45.393777       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:33:45.395774       1 config.go:309] "Starting node config controller"
	I1101 10:33:45.395805       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:33:45.395812       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:33:45.493878       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:33:45.493921       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:33:45.493939       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5] <==
	I1101 10:33:16.352036       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 10:33:17.852716       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 10:33:17.852827       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.174"]
	E1101 10:33:17.853144       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 10:33:17.897871       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 10:33:17.897947       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 10:33:17.897969       1 server_linux.go:132] "Using iptables Proxier"
	I1101 10:33:17.914216       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 10:33:17.915383       1 server.go:527] "Version info" version="v1.34.1"
	I1101 10:33:17.915424       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:17.929768       1 config.go:200] "Starting service config controller"
	I1101 10:33:17.929799       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 10:33:17.929817       1 config.go:106] "Starting endpoint slice config controller"
	I1101 10:33:17.929821       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 10:33:17.929868       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 10:33:17.929891       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 10:33:17.932367       1 config.go:309] "Starting node config controller"
	I1101 10:33:17.932396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 10:33:17.932404       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 10:33:18.030651       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 10:33:18.030669       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 10:33:18.030683       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5c71a895a06e893a8df5e00f8b827024417d6f59c1ce8d00280754ceff495603] <==
	I1101 10:33:39.937174       1 serving.go:386] Generated self-signed cert in-memory
	W1101 10:33:43.665909       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 10:33:43.666674       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 10:33:43.666769       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 10:33:43.666801       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 10:33:43.746264       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:33:43.746311       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:43.748936       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:43.748968       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:43.749149       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:33:43.749200       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:33:43.851096       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [79aebb8f8da407f0f84efde4b63cf2e47e9eb50d7ced3ee123d1953b0dbd8510] <==
	I1101 10:33:15.920213       1 serving.go:386] Generated self-signed cert in-memory
	I1101 10:33:17.931720       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 10:33:17.931758       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 10:33:18.259875       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 10:33:18.259923       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 10:33:18.259977       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:18.259984       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:18.259995       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:33:18.260002       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:33:18.261484       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 10:33:18.261546       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 10:33:18.360792       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 10:33:18.360896       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 10:33:18.361000       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:36.185407       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 10:33:36.185509       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 10:33:36.185643       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 10:33:36.185909       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 10:33:36.186008       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1101 10:33:36.186051       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 10:33:36.186111       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.772521    3927 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.791678    3927 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: E1101 10:33:43.813567    3927 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-876158\" already exists" pod="kube-system/kube-apiserver-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: E1101 10:33:43.815736    3927 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-876158\" already exists" pod="kube-system/kube-controller-manager-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.815781    3927 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: E1101 10:33:43.834408    3927 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-876158\" already exists" pod="kube-system/kube-scheduler-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.834472    3927 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.853468    3927 kubelet_node_status.go:124] "Node was previously registered" node="pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.853581    3927 kubelet_node_status.go:78] "Successfully registered node" node="pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.853730    3927 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: E1101 10:33:43.858026    3927 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-876158\" already exists" pod="kube-system/etcd-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.858076    3927 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-876158"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: I1101 10:33:43.859436    3927 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 10:33:43 pause-876158 kubelet[3927]: E1101 10:33:43.882836    3927 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-876158\" already exists" pod="kube-system/kube-apiserver-pause-876158"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.472955    3927 apiserver.go:52] "Watching apiserver"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.493503    3927 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.548097    3927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3154768-c6f0-4b8f-9a95-c6f6fb16dc98-xtables-lock\") pod \"kube-proxy-4fktf\" (UID: \"a3154768-c6f0-4b8f-9a95-c6f6fb16dc98\") " pod="kube-system/kube-proxy-4fktf"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.548221    3927 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3154768-c6f0-4b8f-9a95-c6f6fb16dc98-lib-modules\") pod \"kube-proxy-4fktf\" (UID: \"a3154768-c6f0-4b8f-9a95-c6f6fb16dc98\") " pod="kube-system/kube-proxy-4fktf"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.778900    3927 scope.go:117] "RemoveContainer" containerID="e318598b1cdc57c0bf3db59c8f6b70bece9cb0fd72f214ac315aaa82863aa6c5"
	Nov 01 10:33:44 pause-876158 kubelet[3927]: I1101 10:33:44.780076    3927 scope.go:117] "RemoveContainer" containerID="2baabf87531925ad29aa58958a9a82ab31e2b4ea6190e885b3d78bb57edab584"
	Nov 01 10:33:48 pause-876158 kubelet[3927]: E1101 10:33:48.640312    3927 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761993228638134044  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:33:48 pause-876158 kubelet[3927]: E1101 10:33:48.640377    3927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761993228638134044  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:33:53 pause-876158 kubelet[3927]: I1101 10:33:53.494140    3927 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 10:33:58 pause-876158 kubelet[3927]: E1101 10:33:58.644461    3927 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761993238643334442  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 10:33:58 pause-876158 kubelet[3927]: E1101 10:33:58.644546    3927 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761993238643334442  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-876158 -n pause-876158
helpers_test.go:269: (dbg) Run:  kubectl --context pause-876158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (83.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (928.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: exit status 80 (15m28.373735796s)

                                                
                                                
-- stdout --
	* [calico-543676] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "calico-543676" primary control-plane node in "calico-543676" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:41:00.623669  385751 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:41:00.623980  385751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:41:00.623991  385751 out.go:374] Setting ErrFile to fd 2...
	I1101 10:41:00.623996  385751 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:41:00.624171  385751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 10:41:00.624767  385751 out.go:368] Setting JSON to false
	I1101 10:41:00.626251  385751 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8609,"bootTime":1761985052,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:41:00.626383  385751 start.go:143] virtualization: kvm guest
	I1101 10:41:00.628335  385751 out.go:179] * [calico-543676] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:41:00.629517  385751 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:41:00.629521  385751 notify.go:221] Checking for updates...
	I1101 10:41:00.632017  385751 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:41:00.633187  385751 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 10:41:00.634475  385751 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 10:41:00.635624  385751 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:41:00.640072  385751 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:41:00.641839  385751 config.go:182] Loaded profile config "auto-543676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:41:00.641980  385751 config.go:182] Loaded profile config "default-k8s-diff-port-586066": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:41:00.642085  385751 config.go:182] Loaded profile config "guest-651909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 10:41:00.642212  385751 config.go:182] Loaded profile config "kindnet-543676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:41:00.642353  385751 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:41:00.680516  385751 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 10:41:00.681566  385751 start.go:309] selected driver: kvm2
	I1101 10:41:00.681586  385751 start.go:930] validating driver "kvm2" against <nil>
	I1101 10:41:00.681609  385751 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:41:00.682337  385751 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 10:41:00.682639  385751 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 10:41:00.682678  385751 cni.go:84] Creating CNI manager for "calico"
	I1101 10:41:00.682687  385751 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1101 10:41:00.682721  385751 start.go:353] cluster config:
	{Name:calico-543676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-543676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:41:00.682813  385751 iso.go:125] acquiring lock: {Name:mkc74493fbbc2007c645c4ed6349cf76e7fb2185 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 10:41:00.684302  385751 out.go:179] * Starting "calico-543676" primary control-plane node in "calico-543676" cluster
	I1101 10:41:00.685451  385751 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:41:00.685489  385751 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 10:41:00.685499  385751 cache.go:59] Caching tarball of preloaded images
	I1101 10:41:00.685590  385751 preload.go:233] Found /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 10:41:00.685607  385751 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 10:41:00.685696  385751 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/config.json ...
	I1101 10:41:00.685714  385751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/config.json: {Name:mk5d825a8cb5831df2787818b9b178c43df2700e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:00.685850  385751 start.go:360] acquireMachinesLock for calico-543676: {Name:mkd221a68334bc82c567a9a06c8563af1e1c38bd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 10:41:00.685901  385751 start.go:364] duration metric: took 37.74µs to acquireMachinesLock for "calico-543676"
	I1101 10:41:00.685920  385751 start.go:93] Provisioning new machine with config: &{Name:calico-543676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-543676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:41:00.685972  385751 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 10:41:00.688280  385751 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1101 10:41:00.688496  385751 start.go:159] libmachine.API.Create for "calico-543676" (driver="kvm2")
	I1101 10:41:00.688535  385751 client.go:173] LocalClient.Create starting
	I1101 10:41:00.688613  385751 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem
	I1101 10:41:00.688653  385751 main.go:143] libmachine: Decoding PEM data...
	I1101 10:41:00.688674  385751 main.go:143] libmachine: Parsing certificate...
	I1101 10:41:00.688771  385751 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem
	I1101 10:41:00.688817  385751 main.go:143] libmachine: Decoding PEM data...
	I1101 10:41:00.688837  385751 main.go:143] libmachine: Parsing certificate...
	I1101 10:41:00.689299  385751 main.go:143] libmachine: creating domain...
	I1101 10:41:00.689316  385751 main.go:143] libmachine: creating network...
	I1101 10:41:00.691000  385751 main.go:143] libmachine: found existing default network
	I1101 10:41:00.691257  385751 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 10:41:00.692221  385751 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c8:fe:e0} reservation:<nil>}
	I1101 10:41:00.693321  385751 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:0a:77:07} reservation:<nil>}
	I1101 10:41:00.693819  385751 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:98:ad:ac} reservation:<nil>}
	I1101 10:41:00.694701  385751 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dd6d50}
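The network.go lines above show the driver skipping subnets that are already in use and settling on 192.168.72.0/24. A simplified and purely hypothetical Go sketch of that idea, which only checks host interface addresses and ignores the libvirt networks and reservations the real code also consults:

	package main

	import (
		"fmt"
		"net"
	)

	// pickSubnet returns the first candidate /24 that no local interface address
	// falls into. Illustrative helper only, not minikube's network.go.
	func pickSubnet(candidates []string) (string, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return "", err
		}
		for _, cidr := range candidates {
			_, subnet, err := net.ParseCIDR(cidr)
			if err != nil {
				return "", err
			}
			taken := false
			for _, a := range addrs {
				if ipNet, ok := a.(*net.IPNet); ok && subnet.Contains(ipNet.IP) {
					taken = true
					break
				}
			}
			if !taken {
				return cidr, nil
			}
		}
		return "", fmt.Errorf("no free subnet among %v", candidates)
	}

	func main() {
		subnet, err := pickSubnet([]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"})
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet", subnet)
	}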
	I1101 10:41:00.694814  385751 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-calico-543676</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 10:41:00.702323  385751 main.go:143] libmachine: creating private network mk-calico-543676 192.168.72.0/24...
	I1101 10:41:00.790346  385751 main.go:143] libmachine: private network mk-calico-543676 192.168.72.0/24 created
	I1101 10:41:00.790630  385751 main.go:143] libmachine: <network>
	  <name>mk-calico-543676</name>
	  <uuid>4cc719ee-0ab9-4575-b2bb-fc0916463455</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:78:d8:74'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
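For readers following along, the define-and-start sequence behind the log lines above maps onto libvirt's network API. The following is a minimal sketch, assuming the libvirt.org/go/libvirt bindings and a local qemu:///system connection; it mirrors the XML printed above but is not minikube's actual libmachine driver code:

	package main

	import (
		"log"

		"libvirt.org/go/libvirt"
	)

	const networkXML = `<network>
	  <name>mk-calico-543676</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Define the persistent network, then bring it up and mark it autostart.
		netw, err := conn.NetworkDefineXML(networkXML)
		if err != nil {
			log.Fatal(err)
		}
		defer netw.Free()
		if err := netw.Create(); err != nil {
			log.Fatal(err)
		}
		if err := netw.SetAutostart(true); err != nil {
			log.Fatal(err)
		}
		log.Println("private network mk-calico-543676 is active")
	}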
	
	I1101 10:41:00.790673  385751 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676 ...
	I1101 10:41:00.790705  385751 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21832-344560/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 10:41:00.790725  385751 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 10:41:00.790846  385751 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21832-344560/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21832-344560/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 10:41:01.118016  385751 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/id_rsa...
	I1101 10:41:01.355949  385751 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/calico-543676.rawdisk...
	I1101 10:41:01.356002  385751 main.go:143] libmachine: Writing magic tar header
	I1101 10:41:01.356060  385751 main.go:143] libmachine: Writing SSH key tar header
	I1101 10:41:01.356206  385751 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676 ...
	I1101 10:41:01.356314  385751 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676
	I1101 10:41:01.356349  385751 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676 (perms=drwx------)
	I1101 10:41:01.356372  385751 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube/machines
	I1101 10:41:01.356392  385751 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube/machines (perms=drwxr-xr-x)
	I1101 10:41:01.356676  385751 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 10:41:01.356700  385751 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560/.minikube (perms=drwxr-xr-x)
	I1101 10:41:01.356728  385751 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21832-344560
	I1101 10:41:01.356760  385751 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21832-344560 (perms=drwxrwxr-x)
	I1101 10:41:01.356781  385751 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 10:41:01.356796  385751 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 10:41:01.356813  385751 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 10:41:01.356828  385751 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 10:41:01.356845  385751 main.go:143] libmachine: checking permissions on dir: /home
	I1101 10:41:01.356859  385751 main.go:143] libmachine: skipping /home - not owner
	I1101 10:41:01.356885  385751 main.go:143] libmachine: defining domain...
	I1101 10:41:01.358438  385751 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>calico-543676</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/calico-543676.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-calico-543676'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
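The domain definition printed above is likewise applied through libvirt. A minimal sketch, again assuming the libvirt.org/go/libvirt bindings; the XML file name here is hypothetical and stands in for the XML dumped above, and this is illustrative rather than the driver's real implementation:

	package main

	import (
		"log"
		"os"

		"libvirt.org/go/libvirt"
	)

	func main() {
		// Hypothetical file containing a domain XML like the one printed above.
		xml, err := os.ReadFile("calico-543676.xml")
		if err != nil {
			log.Fatal(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// DomainDefineXML makes the domain persistent; Create actually boots it.
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()
		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
		log.Println("domain calico-543676 is running")
	}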
	
	I1101 10:41:01.393102  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:97:d7:42 in network default
	I1101 10:41:01.393839  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:01.393892  385751 main.go:143] libmachine: starting domain...
	I1101 10:41:01.393901  385751 main.go:143] libmachine: ensuring networks are active...
	I1101 10:41:01.394726  385751 main.go:143] libmachine: Ensuring network default is active
	I1101 10:41:01.395274  385751 main.go:143] libmachine: Ensuring network mk-calico-543676 is active
	I1101 10:41:01.396022  385751 main.go:143] libmachine: getting domain XML...
	I1101 10:41:01.397259  385751 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>calico-543676</name>
	  <uuid>59bc0f40-3588-4b9b-806b-1cc0ce89a487</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/calico-543676.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:66:05:2e'/>
	      <source network='mk-calico-543676'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:97:d7:42'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 10:41:03.576792  385751 main.go:143] libmachine: waiting for domain to start...
	I1101 10:41:03.578508  385751 main.go:143] libmachine: domain is now running
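For readers following the driver flow, the "defining domain using XML" and "starting domain" steps above correspond to two libvirt calls. Below is a minimal, illustrative sketch of that pattern using the libvirt.org/go/libvirt bindings; it is not the kvm2 driver's actual code, and the command-line argument handling is an assumption added so the sketch runs on its own.

// Illustrative sketch only: define a domain from an XML document (like the one
// dumped above) against the system libvirt daemon, then start it.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func defineAndStart(domainXML string) error {
	// The logs use qemu:///system as the KVMQemuURI.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		return err
	}
	defer conn.Close()

	// "defining domain using XML": register the definition with libvirt.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	defer dom.Free()

	// "starting domain": boot the defined-but-inactive domain.
	return dom.Create()
}

func main() {
	// Hypothetical usage: pass the path to a saved domain XML file.
	xml, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	if err := defineAndStart(string(xml)); err != nil {
		log.Fatal(err)
	}
	log.Println("domain is now running")
}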
	I1101 10:41:03.578533  385751 main.go:143] libmachine: waiting for IP...
	I1101 10:41:03.579450  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:03.580289  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:03.580308  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:03.580738  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:03.580812  385751 retry.go:31] will retry after 280.0772ms: waiting for domain to come up
	I1101 10:41:03.862582  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:03.863615  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:03.863637  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:03.864608  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:03.864656  385751 retry.go:31] will retry after 244.488965ms: waiting for domain to come up
	I1101 10:41:04.111333  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:04.112081  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:04.112097  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:04.112496  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:04.112533  385751 retry.go:31] will retry after 481.657137ms: waiting for domain to come up
	I1101 10:41:04.596173  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:04.597158  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:04.597176  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:04.597697  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:04.597748  385751 retry.go:31] will retry after 465.415081ms: waiting for domain to come up
	I1101 10:41:05.064577  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:05.065599  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:05.065619  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:05.066117  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:05.066164  385751 retry.go:31] will retry after 596.275609ms: waiting for domain to come up
	I1101 10:41:05.664474  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:05.665249  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:05.665273  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:05.665735  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:05.665796  385751 retry.go:31] will retry after 863.271838ms: waiting for domain to come up
	I1101 10:41:06.530275  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:06.531077  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:06.531098  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:06.531493  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:06.531533  385751 retry.go:31] will retry after 1.168760082s: waiting for domain to come up
	I1101 10:41:07.702530  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:07.703542  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:07.703565  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:07.704176  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:07.704225  385751 retry.go:31] will retry after 904.122275ms: waiting for domain to come up
	I1101 10:41:08.612794  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:08.613698  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:08.613721  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:08.614169  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:08.614214  385751 retry.go:31] will retry after 1.530739524s: waiting for domain to come up
	I1101 10:41:10.146337  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:10.147174  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:10.147195  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:10.147704  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:10.147749  385751 retry.go:31] will retry after 1.883329751s: waiting for domain to come up
	I1101 10:41:12.033654  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:12.034683  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:12.034708  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:12.035233  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:12.035299  385751 retry.go:31] will retry after 1.749769465s: waiting for domain to come up
	I1101 10:41:13.787164  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:13.787934  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:13.787958  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:13.788389  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:13.788435  385751 retry.go:31] will retry after 2.926677875s: waiting for domain to come up
	I1101 10:41:16.718211  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:16.720406  385751 main.go:143] libmachine: no network interface addresses found for domain calico-543676 (source=lease)
	I1101 10:41:16.720428  385751 main.go:143] libmachine: trying to list again with source=arp
	I1101 10:41:16.721599  385751 main.go:143] libmachine: unable to find current IP address of domain calico-543676 in network mk-calico-543676 (interfaces detected: [])
	I1101 10:41:16.721653  385751 retry.go:31] will retry after 3.875482383s: waiting for domain to come up
	I1101 10:41:20.601429  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:20.602310  385751 main.go:143] libmachine: domain calico-543676 has current primary IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:20.602327  385751 main.go:143] libmachine: found domain IP: 192.168.72.199
	I1101 10:41:20.602335  385751 main.go:143] libmachine: reserving static IP address...
	I1101 10:41:20.602739  385751 main.go:143] libmachine: unable to find host DHCP lease matching {name: "calico-543676", mac: "52:54:00:66:05:2e", ip: "192.168.72.199"} in network mk-calico-543676
	I1101 10:41:20.825715  385751 main.go:143] libmachine: reserved static IP address 192.168.72.199 for domain calico-543676
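The repeated "will retry after ...: waiting for domain to come up" lines above are a polling loop with jittered, capped backoff that keeps checking for the domain's IP until it appears. A self-contained sketch of that pattern follows; the lookupIP helper, timing constants, and jitter scheme are assumptions for illustration, not the retry.go implementation.

// Sketch: poll for a domain IP with jittered, capped backoff until a deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for listing libvirt DHCP leases (source=lease) and
// falling back to the ARP table (source=arp); here it never finds an address.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no network interface addresses found")
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil && ip != "" {
			return ip, nil
		}
		// Add jitter so concurrent waiters don't poll in lockstep, then back off.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for domain to come up\n", sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for an IP on MAC %s", mac)
}

func main() {
	if ip, err := waitForIP("52:54:00:66:05:2e", 10*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found domain IP:", ip)
	}
}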
	I1101 10:41:20.825772  385751 main.go:143] libmachine: waiting for SSH...
	I1101 10:41:20.825782  385751 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 10:41:20.829474  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:20.829991  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:minikube Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:20.830030  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:20.830287  385751 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:20.830657  385751 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I1101 10:41:20.830675  385751 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 10:41:20.955849  385751 main.go:143] libmachine: SSH cmd err, output: <nil>: 
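The "waiting for SSH" probe above amounts to dialing port 22 with the generated machine key and running "exit 0" until it succeeds. A hedged sketch using golang.org/x/crypto/ssh is shown below; the address, user, and key path are copied from the log purely for illustration, and the host-key handling is a simplification, not minikube's own SSH plumbing.

// Sketch: check SSH readiness by running "exit 0" on the guest.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func sshExitZero(addr, user, keyPath string) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	// The readiness probe in the log is simply "exit 0".
	return session.Run("exit 0")
}

func main() {
	err := sshExitZero("192.168.72.199:22", "docker",
		"/home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is up")
}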
	I1101 10:41:20.956330  385751 main.go:143] libmachine: domain creation complete
	I1101 10:41:20.958350  385751 machine.go:94] provisionDockerMachine start ...
	I1101 10:41:20.961714  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:20.962157  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:20.962185  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:20.962385  385751 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:20.962667  385751 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I1101 10:41:20.962683  385751 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 10:41:21.086885  385751 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 10:41:21.086919  385751 buildroot.go:166] provisioning hostname "calico-543676"
	I1101 10:41:21.090568  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.091091  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:21.091126  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.091397  385751 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:21.091665  385751 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I1101 10:41:21.091678  385751 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-543676 && echo "calico-543676" | sudo tee /etc/hostname
	I1101 10:41:21.234522  385751 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-543676
	
	I1101 10:41:21.238018  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.238546  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:21.238583  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.239097  385751 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:21.239361  385751 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I1101 10:41:21.239388  385751 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-543676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-543676/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-543676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 10:41:21.377171  385751 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 10:41:21.377214  385751 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21832-344560/.minikube CaCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21832-344560/.minikube}
	I1101 10:41:21.377243  385751 buildroot.go:174] setting up certificates
	I1101 10:41:21.377278  385751 provision.go:84] configureAuth start
	I1101 10:41:21.380592  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.381065  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:21.381102  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.383739  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.384057  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:21.384083  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.384214  385751 provision.go:143] copyHostCerts
	I1101 10:41:21.384275  385751 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-344560/.minikube/ca.pem, removing ...
	I1101 10:41:21.384296  385751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-344560/.minikube/ca.pem
	I1101 10:41:21.384380  385751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/ca.pem (1082 bytes)
	I1101 10:41:21.384485  385751 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-344560/.minikube/cert.pem, removing ...
	I1101 10:41:21.384496  385751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-344560/.minikube/cert.pem
	I1101 10:41:21.384538  385751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/cert.pem (1123 bytes)
	I1101 10:41:21.384621  385751 exec_runner.go:144] found /home/jenkins/minikube-integration/21832-344560/.minikube/key.pem, removing ...
	I1101 10:41:21.384630  385751 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21832-344560/.minikube/key.pem
	I1101 10:41:21.384668  385751 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21832-344560/.minikube/key.pem (1679 bytes)
	I1101 10:41:21.384743  385751 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem org=jenkins.calico-543676 san=[127.0.0.1 192.168.72.199 calico-543676 localhost minikube]
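The "generating server cert ... san=[...]" step above issues a server certificate for the machine, signed by the profile CA, covering the SANs listed in that line. A compact crypto/x509 sketch of that kind of issuance follows; the throwaway in-memory CA in main(), the key size, and the validity periods are assumptions so the sketch is runnable, not minikube's certificate code.

// Sketch: issue a CA-signed server certificate with the SANs from the log line.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-543676"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as listed in the log: DNS names plus loopback and the machine IP.
		DNSNames:    []string{"calico-543676", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.199")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Throwaway self-signed CA, standing in for the persisted minikubeCA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	certPEM, err := issueServerCert(caCert, caKey)
	if err != nil {
		log.Fatal(err)
	}
	os.Stdout.Write(certPEM)
}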
	I1101 10:41:21.520402  385751 provision.go:177] copyRemoteCerts
	I1101 10:41:21.520465  385751 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 10:41:21.523748  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.524214  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:21.524246  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.524390  385751 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/id_rsa Username:docker}
	I1101 10:41:21.618859  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 10:41:21.657543  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 10:41:21.692102  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 10:41:21.727307  385751 provision.go:87] duration metric: took 350.010017ms to configureAuth
	I1101 10:41:21.727338  385751 buildroot.go:189] setting minikube options for container-runtime
	I1101 10:41:21.727534  385751 config.go:182] Loaded profile config "calico-543676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:41:21.731193  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.731688  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:21.731715  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:21.731956  385751 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:21.732248  385751 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I1101 10:41:21.732278  385751 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 10:41:22.069655  385751 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 10:41:22.069693  385751 machine.go:97] duration metric: took 1.111323678s to provisionDockerMachine
	I1101 10:41:22.069707  385751 client.go:176] duration metric: took 21.38116073s to LocalClient.Create
	I1101 10:41:22.069723  385751 start.go:167] duration metric: took 21.381228279s to libmachine.API.Create "calico-543676"
	I1101 10:41:22.069737  385751 start.go:293] postStartSetup for "calico-543676" (driver="kvm2")
	I1101 10:41:22.069748  385751 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 10:41:22.069824  385751 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 10:41:22.075757  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.076523  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:22.076562  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.076756  385751 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/id_rsa Username:docker}
	I1101 10:41:22.176422  385751 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 10:41:22.182297  385751 info.go:137] Remote host: Buildroot 2025.02
	I1101 10:41:22.182322  385751 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-344560/.minikube/addons for local assets ...
	I1101 10:41:22.182373  385751 filesync.go:126] Scanning /home/jenkins/minikube-integration/21832-344560/.minikube/files for local assets ...
	I1101 10:41:22.182471  385751 filesync.go:149] local asset: /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem -> 3485182.pem in /etc/ssl/certs
	I1101 10:41:22.182586  385751 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 10:41:22.201598  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem --> /etc/ssl/certs/3485182.pem (1708 bytes)
	I1101 10:41:22.246985  385751 start.go:296] duration metric: took 177.232591ms for postStartSetup
	I1101 10:41:22.252156  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.252737  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:22.252769  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.253183  385751 profile.go:143] Saving config to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/config.json ...
	I1101 10:41:22.253445  385751 start.go:128] duration metric: took 21.567459227s to createHost
	I1101 10:41:22.256604  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.257087  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:22.257117  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.257432  385751 main.go:143] libmachine: Using SSH client type: native
	I1101 10:41:22.257680  385751 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I1101 10:41:22.257696  385751 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 10:41:22.390968  385751 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761993682.337308688
	
	I1101 10:41:22.390992  385751 fix.go:216] guest clock: 1761993682.337308688
	I1101 10:41:22.391002  385751 fix.go:229] Guest: 2025-11-01 10:41:22.337308688 +0000 UTC Remote: 2025-11-01 10:41:22.253461277 +0000 UTC m=+21.682468316 (delta=83.847411ms)
	I1101 10:41:22.391024  385751 fix.go:200] guest clock delta is within tolerance: 83.847411ms
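The fix.go lines above parse the guest's "date +%s.%N" output, compute the host-guest delta, and only resynchronize when it exceeds a tolerance. A tiny sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not the value minikube uses.

// Sketch: compute guest clock skew from `date +%s.%N` output and check a tolerance.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return host.Sub(guest), nil
}

func main() {
	const tolerance = time.Second // assumed tolerance, for illustration only
	delta, err := clockDelta("1761993682.337308688", time.Now())
	if err != nil {
		panic(err)
	}
	abs := delta
	if abs < 0 {
		abs = -abs
	}
	if abs <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}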
	I1101 10:41:22.391032  385751 start.go:83] releasing machines lock for "calico-543676", held for 21.705120757s
	I1101 10:41:22.394549  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.394987  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:22.395022  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.395682  385751 ssh_runner.go:195] Run: cat /version.json
	I1101 10:41:22.395707  385751 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 10:41:22.399827  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.400948  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:22.400985  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.400948  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.401349  385751 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/id_rsa Username:docker}
	I1101 10:41:22.401736  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:22.401777  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:22.401980  385751 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/id_rsa Username:docker}
	I1101 10:41:22.491149  385751 ssh_runner.go:195] Run: systemctl --version
	I1101 10:41:22.526905  385751 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 10:41:22.715697  385751 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 10:41:22.725767  385751 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 10:41:22.725850  385751 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 10:41:22.754908  385751 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 10:41:22.754937  385751 start.go:496] detecting cgroup driver to use...
	I1101 10:41:22.755025  385751 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 10:41:22.784444  385751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 10:41:22.815090  385751 docker.go:218] disabling cri-docker service (if available) ...
	I1101 10:41:22.815150  385751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 10:41:22.851403  385751 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 10:41:22.877102  385751 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 10:41:23.076229  385751 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 10:41:23.355787  385751 docker.go:234] disabling docker service ...
	I1101 10:41:23.355858  385751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 10:41:23.379608  385751 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 10:41:23.396635  385751 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 10:41:23.582938  385751 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 10:41:23.803701  385751 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 10:41:23.825988  385751 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 10:41:23.855704  385751 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 10:41:23.855784  385751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:23.876694  385751 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 10:41:23.876766  385751 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:23.896378  385751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:23.914640  385751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:23.931621  385751 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 10:41:23.947521  385751 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:23.965951  385751 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:23.998558  385751 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 10:41:24.019314  385751 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 10:41:24.036573  385751 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 10:41:24.036639  385751 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 10:41:24.068839  385751 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 10:41:24.087618  385751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:41:24.316981  385751 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 10:41:24.492543  385751 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 10:41:24.492623  385751 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 10:41:24.502005  385751 start.go:564] Will wait 60s for crictl version
	I1101 10:41:24.502075  385751 ssh_runner.go:195] Run: which crictl
	I1101 10:41:24.508587  385751 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 10:41:24.566334  385751 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 10:41:24.566484  385751 ssh_runner.go:195] Run: crio --version
	I1101 10:41:24.611624  385751 ssh_runner.go:195] Run: crio --version
	I1101 10:41:24.657336  385751 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 10:41:24.663092  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:24.663792  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:24.663828  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:24.664299  385751 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1101 10:41:24.670146  385751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:41:24.691214  385751 kubeadm.go:884] updating cluster {Name:calico-543676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-543676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1101 10:41:24.691348  385751 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 10:41:24.691518  385751 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:41:24.737348  385751 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 10:41:24.737420  385751 ssh_runner.go:195] Run: which lz4
	I1101 10:41:24.742763  385751 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 10:41:24.748757  385751 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 10:41:24.748785  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 10:41:26.593002  385751 crio.go:462] duration metric: took 1.85027034s to copy over tarball
	I1101 10:41:26.593739  385751 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 10:41:28.641245  385751 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.047466961s)
	I1101 10:41:28.641304  385751 crio.go:469] duration metric: took 2.048260394s to extract the tarball
	I1101 10:41:28.641324  385751 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 10:41:28.713024  385751 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 10:41:28.775431  385751 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 10:41:28.775468  385751 cache_images.go:86] Images are preloaded, skipping loading
	I1101 10:41:28.775479  385751 kubeadm.go:935] updating node { 192.168.72.199 8443 v1.34.1 crio true true} ...
	I1101 10:41:28.775592  385751 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-543676 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-543676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1101 10:41:28.775683  385751 ssh_runner.go:195] Run: crio config
	I1101 10:41:28.832335  385751 cni.go:84] Creating CNI manager for "calico"
	I1101 10:41:28.832376  385751 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 10:41:28.832406  385751 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-543676 NodeName:calico-543676 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 10:41:28.832567  385751 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-543676"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.199"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 10:41:28.832657  385751 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 10:41:28.848775  385751 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 10:41:28.848856  385751 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 10:41:28.863083  385751 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1101 10:41:28.887631  385751 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 10:41:28.916015  385751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
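The kubeadm.yaml written above (2216 bytes, dumped in full a few lines earlier) is rendered from the kubeadm options logged at kubeadm.go:190. The sketch below shows the general text/template approach for just the InitConfiguration fragment; the parameter struct and template wording are illustrative assumptions, not minikube's own templates.

// Sketch: render an InitConfiguration fragment from a small parameter struct.
package main

import (
	"os"
	"text/template"
)

// initParams holds only the values the fragment needs; names are illustrative.
type initParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	p := initParams{
		AdvertiseAddress: "192.168.72.199",
		BindPort:         8443,
		NodeName:         "calico-543676",
		CRISocket:        "/var/run/crio/crio.sock",
	}
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}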
	I1101 10:41:28.941447  385751 ssh_runner.go:195] Run: grep 192.168.72.199	control-plane.minikube.internal$ /etc/hosts
	I1101 10:41:28.946933  385751 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 10:41:28.963128  385751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:41:29.166079  385751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:41:29.193625  385751 certs.go:69] Setting up /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676 for IP: 192.168.72.199
	I1101 10:41:29.193653  385751 certs.go:195] generating shared ca certs ...
	I1101 10:41:29.193684  385751 certs.go:227] acquiring lock for ca certs: {Name:mkba0fe79f6b0ed99353299aaf34c6fbc547c6f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:29.193906  385751 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key
	I1101 10:41:29.193982  385751 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key
	I1101 10:41:29.193995  385751 certs.go:257] generating profile certs ...
	I1101 10:41:29.194077  385751 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/client.key
	I1101 10:41:29.194096  385751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/client.crt with IP's: []
	I1101 10:41:29.938997  385751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/client.crt ...
	I1101 10:41:29.939033  385751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/client.crt: {Name:mkb744453a31d537af85fbb9e105d996b73e6ad0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:29.939261  385751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/client.key ...
	I1101 10:41:29.939280  385751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/client.key: {Name:mk5d1eaafa4e304e3b31cabc8015ff8fa232797b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:29.939406  385751 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.key.cc5e2b7a
	I1101 10:41:29.939426  385751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.crt.cc5e2b7a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.199]
	I1101 10:41:30.618350  385751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.crt.cc5e2b7a ...
	I1101 10:41:30.618378  385751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.crt.cc5e2b7a: {Name:mk6e5b92e9469420fdec11027c753e3c4faea6d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:30.618571  385751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.key.cc5e2b7a ...
	I1101 10:41:30.618589  385751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.key.cc5e2b7a: {Name:mk5c1b185519e1cce795bc278e4dd668d1a96674 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:30.618692  385751 certs.go:382] copying /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.crt.cc5e2b7a -> /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.crt
	I1101 10:41:30.618791  385751 certs.go:386] copying /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.key.cc5e2b7a -> /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.key
	I1101 10:41:30.618887  385751 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/proxy-client.key
	I1101 10:41:30.618908  385751 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/proxy-client.crt with IP's: []
	I1101 10:41:31.080305  385751 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/proxy-client.crt ...
	I1101 10:41:31.080350  385751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/proxy-client.crt: {Name:mka744164fc0976718e7451cea332ff9813d9901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:31.080555  385751 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/proxy-client.key ...
	I1101 10:41:31.080580  385751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/proxy-client.key: {Name:mk1f1eb32a973d9b0938fe8e1442b44cb062f362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:31.080844  385751 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/348518.pem (1338 bytes)
	W1101 10:41:31.080919  385751 certs.go:480] ignoring /home/jenkins/minikube-integration/21832-344560/.minikube/certs/348518_empty.pem, impossibly tiny 0 bytes
	I1101 10:41:31.080937  385751 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 10:41:31.080976  385751 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/ca.pem (1082 bytes)
	I1101 10:41:31.081013  385751 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/cert.pem (1123 bytes)
	I1101 10:41:31.081045  385751 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/certs/key.pem (1679 bytes)
	I1101 10:41:31.081108  385751 certs.go:484] found cert: /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem (1708 bytes)
	I1101 10:41:31.081970  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 10:41:31.116420  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 10:41:31.147679  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 10:41:31.182657  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 10:41:31.212741  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 10:41:31.243610  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 10:41:31.272758  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 10:41:31.344838  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/calico-543676/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 10:41:31.383704  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 10:41:31.419454  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/certs/348518.pem --> /usr/share/ca-certificates/348518.pem (1338 bytes)
	I1101 10:41:31.453971  385751 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/ssl/certs/3485182.pem --> /usr/share/ca-certificates/3485182.pem (1708 bytes)
	I1101 10:41:31.488304  385751 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 10:41:31.509968  385751 ssh_runner.go:195] Run: openssl version
	I1101 10:41:31.517276  385751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 10:41:31.534721  385751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:31.540673  385751 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 09:27 /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:31.540742  385751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 10:41:31.548975  385751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 10:41:31.567037  385751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/348518.pem && ln -fs /usr/share/ca-certificates/348518.pem /etc/ssl/certs/348518.pem"
	I1101 10:41:31.581123  385751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/348518.pem
	I1101 10:41:31.587050  385751 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:34 /usr/share/ca-certificates/348518.pem
	I1101 10:41:31.587114  385751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/348518.pem
	I1101 10:41:31.598950  385751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/348518.pem /etc/ssl/certs/51391683.0"
	I1101 10:41:31.618239  385751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3485182.pem && ln -fs /usr/share/ca-certificates/3485182.pem /etc/ssl/certs/3485182.pem"
	I1101 10:41:31.632161  385751 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3485182.pem
	I1101 10:41:31.638600  385751 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:34 /usr/share/ca-certificates/3485182.pem
	I1101 10:41:31.638665  385751 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3485182.pem
	I1101 10:41:31.650783  385751 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3485182.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 10:41:31.665101  385751 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 10:41:31.670184  385751 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 10:41:31.670260  385751 kubeadm.go:401] StartCluster: {Name:calico-543676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-543676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 10:41:31.670355  385751 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 10:41:31.670434  385751 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 10:41:31.712937  385751 cri.go:89] found id: ""
	I1101 10:41:31.713011  385751 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 10:41:31.727890  385751 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 10:41:31.740351  385751 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 10:41:31.752657  385751 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 10:41:31.752684  385751 kubeadm.go:158] found existing configuration files:
	
	I1101 10:41:31.752738  385751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 10:41:31.764974  385751 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 10:41:31.765053  385751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 10:41:31.779397  385751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 10:41:31.791445  385751 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 10:41:31.791504  385751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 10:41:31.804293  385751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 10:41:31.817044  385751 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 10:41:31.817103  385751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 10:41:31.832290  385751 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 10:41:31.844343  385751 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 10:41:31.844395  385751 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 10:41:31.856589  385751 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 10:41:32.027834  385751 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 10:41:45.046842  385751 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 10:41:45.046944  385751 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 10:41:45.047059  385751 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 10:41:45.047191  385751 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 10:41:45.047304  385751 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 10:41:45.047397  385751 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 10:41:45.048756  385751 out.go:252]   - Generating certificates and keys ...
	I1101 10:41:45.048854  385751 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 10:41:45.048972  385751 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 10:41:45.049081  385751 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 10:41:45.049172  385751 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 10:41:45.049261  385751 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 10:41:45.049338  385751 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 10:41:45.049446  385751 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 10:41:45.049643  385751 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-543676 localhost] and IPs [192.168.72.199 127.0.0.1 ::1]
	I1101 10:41:45.049719  385751 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 10:41:45.049947  385751 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-543676 localhost] and IPs [192.168.72.199 127.0.0.1 ::1]
	I1101 10:41:45.050063  385751 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 10:41:45.050178  385751 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 10:41:45.050244  385751 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 10:41:45.050335  385751 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 10:41:45.050419  385751 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 10:41:45.050502  385751 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 10:41:45.050579  385751 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 10:41:45.050672  385751 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 10:41:45.050758  385751 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 10:41:45.050912  385751 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 10:41:45.051004  385751 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 10:41:45.052187  385751 out.go:252]   - Booting up control plane ...
	I1101 10:41:45.052328  385751 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 10:41:45.052421  385751 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 10:41:45.052512  385751 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 10:41:45.052630  385751 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 10:41:45.052744  385751 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 10:41:45.052875  385751 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 10:41:45.052955  385751 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 10:41:45.053017  385751 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 10:41:45.053215  385751 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 10:41:45.053335  385751 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 10:41:45.053388  385751 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501582364s
	I1101 10:41:45.053467  385751 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 10:41:45.053549  385751 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.72.199:8443/livez
	I1101 10:41:45.053631  385751 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 10:41:45.053697  385751 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 10:41:45.053782  385751 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.210014802s
	I1101 10:41:45.053888  385751 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.596432311s
	I1101 10:41:45.053965  385751 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.503464868s
	I1101 10:41:45.054099  385751 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 10:41:45.054258  385751 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 10:41:45.054345  385751 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 10:41:45.054589  385751 kubeadm.go:319] [mark-control-plane] Marking the node calico-543676 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 10:41:45.054670  385751 kubeadm.go:319] [bootstrap-token] Using token: 9pcc3j.9igylvvyv7a09weo
	I1101 10:41:45.056508  385751 out.go:252]   - Configuring RBAC rules ...
	I1101 10:41:45.056647  385751 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 10:41:45.056768  385751 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 10:41:45.056914  385751 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 10:41:45.057091  385751 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 10:41:45.057227  385751 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 10:41:45.057403  385751 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 10:41:45.057552  385751 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 10:41:45.057599  385751 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 10:41:45.057638  385751 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 10:41:45.057644  385751 kubeadm.go:319] 
	I1101 10:41:45.057756  385751 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 10:41:45.057780  385751 kubeadm.go:319] 
	I1101 10:41:45.057906  385751 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 10:41:45.057917  385751 kubeadm.go:319] 
	I1101 10:41:45.057952  385751 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 10:41:45.058046  385751 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 10:41:45.058132  385751 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 10:41:45.058143  385751 kubeadm.go:319] 
	I1101 10:41:45.058215  385751 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 10:41:45.058226  385751 kubeadm.go:319] 
	I1101 10:41:45.058296  385751 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 10:41:45.058306  385751 kubeadm.go:319] 
	I1101 10:41:45.058375  385751 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 10:41:45.058483  385751 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 10:41:45.058587  385751 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 10:41:45.058601  385751 kubeadm.go:319] 
	I1101 10:41:45.058712  385751 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 10:41:45.058821  385751 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 10:41:45.058831  385751 kubeadm.go:319] 
	I1101 10:41:45.058983  385751 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9pcc3j.9igylvvyv7a09weo \
	I1101 10:41:45.059224  385751 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8453eb9bfec31a6f8a04d37b2b2ee7df64866720c9de26f8457973b66dd9966b \
	I1101 10:41:45.059256  385751 kubeadm.go:319] 	--control-plane 
	I1101 10:41:45.059260  385751 kubeadm.go:319] 
	I1101 10:41:45.059331  385751 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 10:41:45.059337  385751 kubeadm.go:319] 
	I1101 10:41:45.059406  385751 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9pcc3j.9igylvvyv7a09weo \
	I1101 10:41:45.059505  385751 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8453eb9bfec31a6f8a04d37b2b2ee7df64866720c9de26f8457973b66dd9966b 
	I1101 10:41:45.059516  385751 cni.go:84] Creating CNI manager for "calico"
	I1101 10:41:45.061853  385751 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1101 10:41:45.064737  385751 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 10:41:45.064763  385751 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1101 10:41:45.094886  385751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 10:41:47.051956  385751 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.957028334s)
	I1101 10:41:47.052066  385751 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 10:41:47.052239  385751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:47.052315  385751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-543676 minikube.k8s.io/updated_at=2025_11_01T10_41_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2f4da0d06f3d83242f5e93b9b09cfef44c5a595d minikube.k8s.io/name=calico-543676 minikube.k8s.io/primary=true
	I1101 10:41:47.090502  385751 ops.go:34] apiserver oom_adj: -16
	I1101 10:41:47.290768  385751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:47.791076  385751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:48.290897  385751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:48.791751  385751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:49.291002  385751 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 10:41:49.544587  385751 kubeadm.go:1114] duration metric: took 2.492403763s to wait for elevateKubeSystemPrivileges
	I1101 10:41:49.544630  385751 kubeadm.go:403] duration metric: took 17.874383669s to StartCluster
	I1101 10:41:49.544656  385751 settings.go:142] acquiring lock: {Name:mk0cdfdd584044c1d93f88e46e35ef3af10fed81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:49.544742  385751 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 10:41:49.545953  385751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21832-344560/kubeconfig: {Name:mkaf75364e29c8ee4b260af678d355333969cf4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 10:41:49.546284  385751 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 10:41:49.546325  385751 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 10:41:49.546448  385751 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 10:41:49.546553  385751 config.go:182] Loaded profile config "calico-543676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:41:49.546564  385751 addons.go:70] Setting default-storageclass=true in profile "calico-543676"
	I1101 10:41:49.546581  385751 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-543676"
	I1101 10:41:49.546555  385751 addons.go:70] Setting storage-provisioner=true in profile "calico-543676"
	I1101 10:41:49.546607  385751 addons.go:239] Setting addon storage-provisioner=true in "calico-543676"
	I1101 10:41:49.546641  385751 host.go:66] Checking if "calico-543676" exists ...
	I1101 10:41:49.548108  385751 out.go:179] * Verifying Kubernetes components...
	I1101 10:41:49.549945  385751 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 10:41:49.551505  385751 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 10:41:49.552090  385751 addons.go:239] Setting addon default-storageclass=true in "calico-543676"
	I1101 10:41:49.552132  385751 host.go:66] Checking if "calico-543676" exists ...
	I1101 10:41:49.553952  385751 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 10:41:49.553970  385751 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 10:41:49.556051  385751 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:41:49.556068  385751 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 10:41:49.556712  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:49.557281  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:49.557316  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:49.557603  385751 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/id_rsa Username:docker}
	I1101 10:41:49.559615  385751 main.go:143] libmachine: domain calico-543676 has defined MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:49.560153  385751 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:66:05:2e", ip: ""} in network mk-calico-543676: {Iface:virbr4 ExpiryTime:2025-11-01 11:41:19 +0000 UTC Type:0 Mac:52:54:00:66:05:2e Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:calico-543676 Clientid:01:52:54:00:66:05:2e}
	I1101 10:41:49.560184  385751 main.go:143] libmachine: domain calico-543676 has defined IP address 192.168.72.199 and MAC address 52:54:00:66:05:2e in network mk-calico-543676
	I1101 10:41:49.560394  385751 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/calico-543676/id_rsa Username:docker}
	I1101 10:41:49.949781  385751 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 10:41:49.950011  385751 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 10:41:50.305330  385751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 10:41:50.307590  385751 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 10:41:50.608765  385751 start.go:977] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1101 10:41:50.609699  385751 node_ready.go:35] waiting up to 15m0s for node "calico-543676" to be "Ready" ...
	I1101 10:41:50.937363  385751 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 10:41:50.938897  385751 addons.go:515] duration metric: took 1.392441804s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 10:41:51.115701  385751 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-543676" context rescaled to 1 replicas
	W1101 10:41:53.250050  385751 node_ready.go:57] node "calico-543676" has "Ready":"False" status (will retry)
	W1101 10:41:55.614656  385751 node_ready.go:57] node "calico-543676" has "Ready":"False" status (will retry)
	W1101 10:41:57.615734  385751 node_ready.go:57] node "calico-543676" has "Ready":"False" status (will retry)
	I1101 10:41:58.629872  385751 node_ready.go:49] node "calico-543676" is "Ready"
	I1101 10:41:58.629918  385751 node_ready.go:38] duration metric: took 8.020158278s for node "calico-543676" to be "Ready" ...
	I1101 10:41:58.629936  385751 api_server.go:52] waiting for apiserver process to appear ...
	I1101 10:41:58.629995  385751 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:41:58.679209  385751 api_server.go:72] duration metric: took 9.132838691s to wait for apiserver process to appear ...
	I1101 10:41:58.679241  385751 api_server.go:88] waiting for apiserver healthz status ...
	I1101 10:41:58.679286  385751 api_server.go:253] Checking apiserver healthz at https://192.168.72.199:8443/healthz ...
	I1101 10:41:58.688032  385751 api_server.go:279] https://192.168.72.199:8443/healthz returned 200:
	ok
	I1101 10:41:58.691904  385751 api_server.go:141] control plane version: v1.34.1
	I1101 10:41:58.691932  385751 api_server.go:131] duration metric: took 12.682997ms to wait for apiserver health ...
	I1101 10:41:58.691943  385751 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 10:41:58.697405  385751 system_pods.go:59] 9 kube-system pods found
	I1101 10:41:58.697459  385751 system_pods.go:61] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:41:58.697475  385751 system_pods.go:61] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:41:58.697486  385751 system_pods.go:61] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending
	I1101 10:41:58.697494  385751 system_pods.go:61] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:41:58.697507  385751 system_pods.go:61] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:41:58.697514  385751 system_pods.go:61] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:41:58.697525  385751 system_pods.go:61] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:41:58.697531  385751 system_pods.go:61] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:41:58.697544  385751 system_pods.go:61] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:58.697552  385751 system_pods.go:74] duration metric: took 5.602297ms to wait for pod list to return data ...
	I1101 10:41:58.697568  385751 default_sa.go:34] waiting for default service account to be created ...
	I1101 10:41:58.702161  385751 default_sa.go:45] found service account: "default"
	I1101 10:41:58.702183  385751 default_sa.go:55] duration metric: took 4.607242ms for default service account to be created ...
	I1101 10:41:58.702193  385751 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 10:41:58.708450  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:41:58.708481  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:41:58.708500  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:41:58.708511  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:58.708518  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:41:58.708528  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:41:58.708534  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:41:58.708541  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:41:58.708548  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:41:58.708558  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:58.708577  385751 retry.go:31] will retry after 250.402417ms: missing components: kube-dns
	I1101 10:41:58.974830  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:41:58.974893  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:41:58.974907  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:41:58.974917  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:58.974927  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:41:58.974936  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:41:58.974947  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:41:58.974957  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:41:58.974963  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:41:58.974971  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:58.974995  385751 retry.go:31] will retry after 373.127885ms: missing components: kube-dns
	I1101 10:41:59.356522  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:41:59.356566  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:41:59.356579  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:41:59.356591  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:59.356598  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:41:59.356606  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:41:59.356613  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:41:59.356628  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:41:59.356632  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:41:59.356640  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 10:41:59.356668  385751 retry.go:31] will retry after 296.085509ms: missing components: kube-dns
	I1101 10:41:59.657939  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:41:59.657969  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:41:59.657977  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:41:59.657984  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:41:59.657988  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:41:59.657993  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:41:59.657996  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:41:59.658000  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:41:59.658004  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:41:59.658009  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:41:59.658027  385751 retry.go:31] will retry after 508.857082ms: missing components: kube-dns
	I1101 10:42:00.172935  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:00.172984  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:00.172999  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:00.173014  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:00.173029  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:00.173037  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:00.173050  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:00.173057  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:00.173066  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:00.173078  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:00.173105  385751 retry.go:31] will retry after 562.665735ms: missing components: kube-dns
	I1101 10:42:00.739892  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:00.739938  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:00.739950  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:00.739961  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:00.739967  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:00.739975  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:00.739979  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:00.739983  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:00.739987  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:00.739990  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:00.740006  385751 retry.go:31] will retry after 610.22282ms: missing components: kube-dns
	I1101 10:42:01.354508  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:01.354553  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:01.354565  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:01.354575  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:01.354581  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:01.354588  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:01.354594  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:01.354603  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:01.354608  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:01.354613  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:01.354632  385751 retry.go:31] will retry after 957.353712ms: missing components: kube-dns
	I1101 10:42:02.317753  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:02.317803  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:02.317819  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:02.317829  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:02.317835  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:02.317843  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:02.317849  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:02.317857  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:02.317878  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:02.317888  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:02.317911  385751 retry.go:31] will retry after 1.070990516s: missing components: kube-dns
	I1101 10:42:03.395555  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:03.395590  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:03.395599  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:03.395608  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:03.395613  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:03.395617  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:03.395620  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:03.395624  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:03.395630  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:03.395636  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:03.395654  385751 retry.go:31] will retry after 1.767733631s: missing components: kube-dns
	I1101 10:42:05.170484  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:05.170516  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:05.170529  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:05.170539  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:05.170552  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:05.170560  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:05.170566  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:05.170572  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:05.170581  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:05.170586  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:05.170603  385751 retry.go:31] will retry after 1.889857264s: missing components: kube-dns
	I1101 10:42:07.065745  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:07.065780  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:07.065792  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:07.065805  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:07.065811  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:07.065824  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:07.065829  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:07.065835  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:07.065840  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:07.065845  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:07.065885  385751 retry.go:31] will retry after 2.430518786s: missing components: kube-dns
	I1101 10:42:09.504326  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:09.504363  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:09.504376  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:09.504386  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:09.504392  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:09.504400  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:09.504405  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:09.504411  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:09.504417  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:09.504422  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:09.504442  385751 retry.go:31] will retry after 2.628471959s: missing components: kube-dns
	I1101 10:42:12.137707  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:12.137750  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:12.137762  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:12.137771  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:12.137777  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:12.137784  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:12.137789  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:12.137794  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:12.137799  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:12.137804  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:12.137822  385751 retry.go:31] will retry after 3.478911407s: missing components: kube-dns
	I1101 10:42:15.622983  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:15.623028  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:15.623046  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:15.623058  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:15.623064  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:15.623071  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:15.623077  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:15.623082  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:15.623087  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:15.623093  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:15.623111  385751 retry.go:31] will retry after 5.085552873s: missing components: kube-dns
	I1101 10:42:20.714684  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:20.714714  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:20.714722  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:20.714729  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:20.714732  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:20.714737  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:20.714740  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:20.714743  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:20.714746  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:20.714750  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:20.714768  385751 retry.go:31] will retry after 5.743513015s: missing components: kube-dns
	I1101 10:42:26.470181  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:26.470221  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:26.470234  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:26.470245  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:26.470252  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:26.470259  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:26.470264  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:26.470269  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:26.470273  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:26.470277  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:26.470297  385751 retry.go:31] will retry after 6.25329336s: missing components: kube-dns
	I1101 10:42:32.728757  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:32.728795  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:32.728808  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:32.728825  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:32.728832  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:32.728841  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:32.728846  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:32.728854  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:32.728857  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:32.728872  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:32.728889  385751 retry.go:31] will retry after 9.563488004s: missing components: kube-dns
	I1101 10:42:42.301892  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:42.301939  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:42.301957  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:42.301976  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:42.301983  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:42.301991  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:42.301999  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:42.302004  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:42.302009  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:42.302018  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:42.302037  385751 retry.go:31] will retry after 9.217356509s: missing components: kube-dns
	I1101 10:42:51.527746  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:42:51.527797  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:42:51.527813  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:42:51.527824  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:42:51.527830  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:42:51.527839  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:42:51.527845  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:42:51.527851  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:42:51.527857  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:42:51.527884  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:42:51.527917  385751 retry.go:31] will retry after 16.369823293s: missing components: kube-dns
	I1101 10:43:07.903514  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:43:07.903544  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:43:07.903553  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:43:07.903561  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:43:07.903566  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:43:07.903575  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:43:07.903579  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:43:07.903584  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:43:07.903589  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:43:07.903600  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:43:07.903618  385751 retry.go:31] will retry after 19.980135978s: missing components: kube-dns
	I1101 10:43:27.890406  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:43:27.890456  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:43:27.890479  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:43:27.890490  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:43:27.890500  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:43:27.890510  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:43:27.890516  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:43:27.890526  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:43:27.890531  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:43:27.890543  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:43:27.890569  385751 retry.go:31] will retry after 21.000375785s: missing components: kube-dns
	I1101 10:43:48.899729  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:43:48.899770  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:43:48.899785  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:43:48.899800  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:43:48.899811  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:43:48.899819  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:43:48.899825  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:43:48.899835  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:43:48.899841  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:43:48.899847  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:43:48.899882  385751 retry.go:31] will retry after 24.82537742s: missing components: kube-dns
	I1101 10:44:13.729971  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:44:13.730020  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:44:13.730038  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:44:13.730048  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:44:13.730060  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:44:13.730070  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:44:13.730079  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:44:13.730086  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:44:13.730092  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:44:13.730095  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:44:13.730109  385751 retry.go:31] will retry after 26.452301086s: missing components: kube-dns
	I1101 10:44:40.187372  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:44:40.187417  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:44:40.187432  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:44:40.187441  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:44:40.187448  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:44:40.187465  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:44:40.187474  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:44:40.187479  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:44:40.187482  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:44:40.187485  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:44:40.187508  385751 retry.go:31] will retry after 42.667828193s: missing components: kube-dns
	I1101 10:45:22.862123  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:45:22.862160  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:45:22.862169  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:45:22.862177  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:45:22.862180  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:45:22.862185  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:45:22.862188  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:45:22.862191  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:45:22.862194  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:45:22.862197  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:45:22.862214  385751 retry.go:31] will retry after 50.14620072s: missing components: kube-dns
	I1101 10:46:13.015377  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:46:13.015422  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:46:13.015432  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:46:13.015439  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:46:13.015443  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:46:13.015448  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:46:13.015451  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:46:13.015456  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:46:13.015459  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:46:13.015461  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:46:13.015480  385751 retry.go:31] will retry after 1m14.656905436s: missing components: kube-dns
	I1101 10:47:27.678594  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:47:27.678710  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:47:27.678719  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:47:27.678725  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:47:27.678729  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:47:27.678733  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:47:27.678736  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:47:27.678739  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:47:27.678742  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:47:27.678745  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:47:27.678763  385751 retry.go:31] will retry after 1m9.653539738s: missing components: kube-dns
	I1101 10:48:37.338596  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:48:37.338635  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:48:37.338645  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:48:37.338652  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:48:37.338656  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:48:37.338662  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:48:37.338666  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:48:37.338670  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:48:37.338673  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:48:37.338676  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:48:37.338695  385751 retry.go:31] will retry after 48.672872887s: missing components: kube-dns
	I1101 10:49:26.017199  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:49:26.017235  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:49:26.017246  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:49:26.017253  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:49:26.017257  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:49:26.017262  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:49:26.017265  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:49:26.017268  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:49:26.017272  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:49:26.017275  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:49:26.017290  385751 retry.go:31] will retry after 1m9.310105626s: missing components: kube-dns
	I1101 10:50:35.336667  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:50:35.336705  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:50:35.336715  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:50:35.336721  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:50:35.336726  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:50:35.336732  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:50:35.336735  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:50:35.336739  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:50:35.336742  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:50:35.336745  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:50:35.336761  385751 retry.go:31] will retry after 1m7.588137213s: missing components: kube-dns
	I1101 10:51:42.930355  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:51:42.930405  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:51:42.930421  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:51:42.930428  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:51:42.930432  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:51:42.930436  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:51:42.930440  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:51:42.930447  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:51:42.930450  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:51:42.930453  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:51:42.930470  385751 retry.go:31] will retry after 55.645584374s: missing components: kube-dns
	I1101 10:52:38.582941  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:52:38.582979  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:52:38.583017  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:52:38.583025  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:52:38.583029  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:52:38.583035  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:52:38.583039  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:52:38.583042  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:52:38.583046  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:52:38.583049  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:52:38.583067  385751 retry.go:31] will retry after 52.103982067s: missing components: kube-dns
	I1101 10:53:30.692170  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:53:30.692206  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:53:30.692216  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:53:30.692223  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:53:30.692227  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:53:30.692232  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:53:30.692235  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:53:30.692238  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:53:30.692243  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:53:30.692245  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:53:30.692260  385751 retry.go:31] will retry after 50.293474225s: missing components: kube-dns
	I1101 10:54:20.991019  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:54:20.991128  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:54:20.991155  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:54:20.991170  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:54:20.991178  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:54:20.991186  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:54:20.991192  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:54:20.991200  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:54:20.991205  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:54:20.991211  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:54:20.991232  385751 retry.go:31] will retry after 1m1.55340771s: missing components: kube-dns
	I1101 10:55:22.550128  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:55:22.550162  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:55:22.550171  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:55:22.550177  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:55:22.550181  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:55:22.550187  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:55:22.550190  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:55:22.550193  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:55:22.550196  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:55:22.550199  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:55:22.550215  385751 retry.go:31] will retry after 1m6.369566363s: missing components: kube-dns
	I1101 10:56:28.925926  385751 system_pods.go:86] 9 kube-system pods found
	I1101 10:56:28.925967  385751 system_pods.go:89] "calico-kube-controllers-59556d9b4c-6qfp9" [9cac41bd-4116-42e7-96fa-d2834e565f6d] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1101 10:56:28.925977  385751 system_pods.go:89] "calico-node-xlwkq" [fb4131cb-434e-4d14-812b-82cf4105077c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1101 10:56:28.925987  385751 system_pods.go:89] "coredns-66bc5c9577-cxhsm" [5ff73b67-e932-4e0f-acf3-5820d0d718cf] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 10:56:28.925992  385751 system_pods.go:89] "etcd-calico-543676" [33778a59-e79d-4568-883a-aa66c29e455e] Running
	I1101 10:56:28.925999  385751 system_pods.go:89] "kube-apiserver-calico-543676" [363487b4-1089-4af8-9ddc-ee60ff70d0ca] Running
	I1101 10:56:28.926004  385751 system_pods.go:89] "kube-controller-manager-calico-543676" [1f02e6ca-7d7e-4607-a6f9-5c70ef65cd0b] Running
	I1101 10:56:28.926014  385751 system_pods.go:89] "kube-proxy-4f2v7" [d833ef37-9852-4c57-b168-227f701fe093] Running
	I1101 10:56:28.926018  385751 system_pods.go:89] "kube-scheduler-calico-543676" [e92217e0-526b-458f-af14-cf9fb0d1cfb0] Running
	I1101 10:56:28.926022  385751 system_pods.go:89] "storage-provisioner" [e7b87718-af81-4ff9-b1ce-abffe72c7810] Running
	I1101 10:56:28.928166  385751 out.go:203] 
	W1101 10:56:28.929410  385751 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 15m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W1101 10:56:28.929427  385751 out.go:285] * 
	W1101 10:56:28.931058  385751 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 10:56:28.932307  385751 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (928.42s)
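
The retry loop above shows why the 15m0s wait expired: kube-dns never reports ready because coredns-66bc5c9577-cxhsm stays Pending, which in turn is blocked on calico-node-xlwkq never getting past its mount-bpffs init container. As a non-authoritative triage sketch (the context, pod, and container names are copied from the log above, and the profile may already have been torn down by the test harness), commands along these lines would surface the stuck init container directly:

	kubectl --context calico-543676 -n kube-system get pods -o wide
	kubectl --context calico-543676 -n kube-system describe pod calico-node-xlwkq
	kubectl --context calico-543676 -n kube-system logs calico-node-xlwkq -c mount-bpffs
	kubectl --context calico-543676 -n kube-system get events --sort-by=.lastTimestamp

If the VM has already been deleted, the file produced by the advice box above (minikube logs --file=logs.txt -p calico-543676) is the remaining evidence to attach to an upstream issue.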

                                                
                                    

Test pass (292/337)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.24
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.17
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.1/json-events 3.81
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.17
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.16
21 TestBinaryMirror 0.67
22 TestOffline 89.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 162.07
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 9.56
35 TestAddons/parallel/Registry 16.85
36 TestAddons/parallel/RegistryCreds 0.77
38 TestAddons/parallel/InspektorGadget 6.38
39 TestAddons/parallel/MetricsServer 6.11
41 TestAddons/parallel/CSI 68.5
42 TestAddons/parallel/Headlamp 21.98
43 TestAddons/parallel/CloudSpanner 7.05
44 TestAddons/parallel/LocalPath 56.02
45 TestAddons/parallel/NvidiaDevicePlugin 6.95
46 TestAddons/parallel/Yakd 12.51
48 TestAddons/StoppedEnableDisable 79.29
49 TestCertOptions 93.85
50 TestCertExpiration 307.14
52 TestForceSystemdFlag 74.43
53 TestForceSystemdEnv 71.65
58 TestErrorSpam/setup 39.72
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.71
61 TestErrorSpam/pause 1.62
62 TestErrorSpam/unpause 1.95
63 TestErrorSpam/stop 4.71
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 84.49
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 62.17
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.56
75 TestFunctional/serial/CacheCmd/cache/add_local 1.54
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.66
80 TestFunctional/serial/CacheCmd/cache/delete 0.14
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 35.79
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.66
86 TestFunctional/serial/LogsFileCmd 1.62
87 TestFunctional/serial/InvalidService 4.54
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 10.41
91 TestFunctional/parallel/DryRun 0.25
92 TestFunctional/parallel/InternationalLanguage 0.13
93 TestFunctional/parallel/StatusCmd 0.81
97 TestFunctional/parallel/ServiceCmdConnect 20.57
98 TestFunctional/parallel/AddonsCmd 0.17
101 TestFunctional/parallel/SSHCmd 0.36
102 TestFunctional/parallel/CpCmd 1.28
103 TestFunctional/parallel/MySQL 24.77
104 TestFunctional/parallel/FileSync 0.2
105 TestFunctional/parallel/CertSync 1.2
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.4
113 TestFunctional/parallel/License 0.69
114 TestFunctional/parallel/Version/short 0.08
115 TestFunctional/parallel/Version/components 0.79
116 TestFunctional/parallel/ImageCommands/ImageListShort 1.09
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
119 TestFunctional/parallel/ImageCommands/ImageListYaml 1.02
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.49
121 TestFunctional/parallel/ImageCommands/Setup 1
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.64
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.28
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.88
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.1
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.97
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.64
141 TestFunctional/parallel/ServiceCmd/DeployApp 12.28
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
143 TestFunctional/parallel/ProfileCmd/profile_list 0.33
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
145 TestFunctional/parallel/MountCmd/any-port 7.03
146 TestFunctional/parallel/ServiceCmd/List 1.31
147 TestFunctional/parallel/ServiceCmd/JSONOutput 1.25
148 TestFunctional/parallel/MountCmd/specific-port 1.61
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
150 TestFunctional/parallel/ServiceCmd/Format 0.34
151 TestFunctional/parallel/ServiceCmd/URL 0.35
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.31
153 TestFunctional/delete_echo-server_images 0.05
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 208.1
161 TestMultiControlPlane/serial/DeployApp 5.59
162 TestMultiControlPlane/serial/PingHostFromPods 1.42
163 TestMultiControlPlane/serial/AddWorkerNode 75.11
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
166 TestMultiControlPlane/serial/CopyFile 11.46
167 TestMultiControlPlane/serial/StopSecondaryNode 88.42
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.52
169 TestMultiControlPlane/serial/RestartSecondaryNode 45.14
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.99
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 378.8
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.51
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
174 TestMultiControlPlane/serial/StopCluster 229.38
175 TestMultiControlPlane/serial/RestartCluster 107.55
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
177 TestMultiControlPlane/serial/AddSecondaryNode 93.53
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.73
183 TestJSONOutput/start/Command 89.01
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.78
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.66
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.15
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.24
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 85.27
215 TestMountStart/serial/StartWithMountFirst 23.7
216 TestMountStart/serial/VerifyMountFirst 0.32
217 TestMountStart/serial/StartWithMountSecond 23.06
218 TestMountStart/serial/VerifyMountSecond 0.32
219 TestMountStart/serial/DeleteFirst 0.71
220 TestMountStart/serial/VerifyMountPostDelete 0.32
221 TestMountStart/serial/Stop 1.38
222 TestMountStart/serial/RestartStopped 21.01
223 TestMountStart/serial/VerifyMountPostStop 0.32
226 TestMultiNode/serial/FreshStart2Nodes 104.85
227 TestMultiNode/serial/DeployApp2Nodes 4.46
228 TestMultiNode/serial/PingHostFrom2Pods 0.91
229 TestMultiNode/serial/AddNode 42.22
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.48
232 TestMultiNode/serial/CopyFile 6.33
233 TestMultiNode/serial/StopNode 2.54
234 TestMultiNode/serial/StartAfterStop 46.44
235 TestMultiNode/serial/RestartKeepsNodes 306.41
236 TestMultiNode/serial/DeleteNode 2.7
237 TestMultiNode/serial/StopMultiNode 159.09
238 TestMultiNode/serial/RestartMultiNode 119.47
239 TestMultiNode/serial/ValidateNameConflict 44.85
246 TestScheduledStopUnix 114.23
250 TestRunningBinaryUpgrade 114.11
252 TestKubernetesUpgrade 200.59
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestStoppedBinaryUpgrade/Setup 0.55
257 TestNoKubernetes/serial/StartWithK8s 88.97
258 TestStoppedBinaryUpgrade/Upgrade 168.02
259 TestNoKubernetes/serial/StartWithStopK8s 30.04
260 TestNoKubernetes/serial/Start 53.03
261 TestStoppedBinaryUpgrade/MinikubeLogs 1.39
270 TestPause/serial/Start 97.96
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.17
272 TestNoKubernetes/serial/ProfileList 1.05
273 TestNoKubernetes/serial/Stop 1.35
274 TestNoKubernetes/serial/StartNoArgs 53.96
282 TestNetworkPlugins/group/false 5.43
286 TestISOImage/Setup 53.43
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
290 TestISOImage/Binaries/crictl 0.22
291 TestISOImage/Binaries/curl 0.18
292 TestISOImage/Binaries/docker 0.18
293 TestISOImage/Binaries/git 0.19
294 TestISOImage/Binaries/iptables 0.19
295 TestISOImage/Binaries/podman 0.18
296 TestISOImage/Binaries/rsync 0.18
297 TestISOImage/Binaries/socat 0.18
298 TestISOImage/Binaries/wget 0.2
299 TestISOImage/Binaries/VBoxControl 0.18
300 TestISOImage/Binaries/VBoxService 0.18
302 TestStartStop/group/old-k8s-version/serial/FirstStart 95.9
304 TestStartStop/group/no-preload/serial/FirstStart 117.39
306 TestStartStop/group/embed-certs/serial/FirstStart 107.09
307 TestStartStop/group/old-k8s-version/serial/DeployApp 8.35
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.18
309 TestStartStop/group/old-k8s-version/serial/Stop 87.95
310 TestStartStop/group/no-preload/serial/DeployApp 9.32
311 TestStartStop/group/embed-certs/serial/DeployApp 8.34
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
314 TestStartStop/group/no-preload/serial/Stop 83.29
315 TestStartStop/group/embed-certs/serial/Stop 85.8
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 90.31
318 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.16
319 TestStartStop/group/old-k8s-version/serial/SecondStart 121.3
320 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
321 TestStartStop/group/no-preload/serial/SecondStart 76.94
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
323 TestStartStop/group/embed-certs/serial/SecondStart 80.32
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.46
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.61
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 89.17
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 10.01
330 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
331 TestStartStop/group/no-preload/serial/Pause 2.83
333 TestStartStop/group/newest-cni/serial/FirstStart 48.97
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
335 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
337 TestStartStop/group/embed-certs/serial/Pause 3.05
338 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
339 TestNetworkPlugins/group/auto/Start 92.87
340 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
341 TestStartStop/group/old-k8s-version/serial/Pause 2.79
342 TestNetworkPlugins/group/kindnet/Start 89.83
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
345 TestStartStop/group/newest-cni/serial/Stop 7.69
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
347 TestStartStop/group/newest-cni/serial/SecondStart 44.38
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 62.94
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
353 TestStartStop/group/newest-cni/serial/Pause 4.06
354 TestNetworkPlugins/group/auto/KubeletFlags 0.21
355 TestNetworkPlugins/group/auto/NetCatPod 12.32
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.28
360 TestNetworkPlugins/group/auto/DNS 0.2
361 TestNetworkPlugins/group/auto/Localhost 0.18
362 TestNetworkPlugins/group/auto/HairPin 0.18
363 TestNetworkPlugins/group/kindnet/DNS 0.22
364 TestNetworkPlugins/group/kindnet/Localhost 0.17
365 TestNetworkPlugins/group/kindnet/HairPin 0.18
366 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 10.01
367 TestNetworkPlugins/group/custom-flannel/Start 70.84
368 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
369 TestNetworkPlugins/group/enable-default-cni/Start 96.06
370 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
371 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.02
372 TestNetworkPlugins/group/flannel/Start 95.41
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.71
375 TestNetworkPlugins/group/custom-flannel/DNS 0.16
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
378 TestNetworkPlugins/group/bridge/Start 80.93
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.18
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.24
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
386 TestNetworkPlugins/group/flannel/NetCatPod 12.3
387 TestNetworkPlugins/group/flannel/DNS 0.16
388 TestNetworkPlugins/group/flannel/Localhost 0.15
389 TestNetworkPlugins/group/flannel/HairPin 0.15
391 TestISOImage/PersistentMounts//data 0.18
392 TestISOImage/PersistentMounts//var/lib/docker 0.2
393 TestISOImage/PersistentMounts//var/lib/cni 0.18
394 TestISOImage/PersistentMounts//var/lib/kubelet 0.18
395 TestISOImage/PersistentMounts//var/lib/minikube 0.19
396 TestISOImage/PersistentMounts//var/lib/toolbox 0.19
397 TestISOImage/PersistentMounts//var/lib/boot2docker 0.18
398 TestISOImage/eBPFSupport 0.18
399 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
400 TestNetworkPlugins/group/bridge/NetCatPod 10.24
401 TestNetworkPlugins/group/bridge/DNS 0.14
402 TestNetworkPlugins/group/bridge/Localhost 0.12
403 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.28.0/json-events (6.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-462566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-462566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.240266336s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 09:26:42.420189  348518 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 09:26:42.420287  348518 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
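Note: the preload-exists subtest above only verifies that the cached tarball is already on disk. Below is a minimal, hypothetical sketch of that check, assuming the cache layout shown in the preload.go log lines ($MINIKUBE_HOME/cache/preloaded-tarball/...); the helper name is invented and this is not minikube's own implementation.

// preload_exists_sketch.go -- a minimal, hypothetical sketch of the check this
// subtest relies on: the preload tarball for the requested Kubernetes version
// must already be present in the local cache. Not minikube's implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the cache layout visible in the preload.go log lines:
// $MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-<k8s>-cri-o-overlay-amd64.tar.lz4
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("preload exists:", p)
}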

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-462566
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-462566: exit status 85 (78.234845ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-462566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-462566 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:26:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:26:36.237150  348530 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:26:36.237283  348530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:26:36.237295  348530 out.go:374] Setting ErrFile to fd 2...
	I1101 09:26:36.237300  348530 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:26:36.237542  348530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	W1101 09:26:36.237737  348530 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21832-344560/.minikube/config/config.json: open /home/jenkins/minikube-integration/21832-344560/.minikube/config/config.json: no such file or directory
	I1101 09:26:36.238262  348530 out.go:368] Setting JSON to true
	I1101 09:26:36.239338  348530 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4144,"bootTime":1761985052,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:26:36.239403  348530 start.go:143] virtualization: kvm guest
	I1101 09:26:36.241616  348530 out.go:99] [download-only-462566] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:26:36.241768  348530 notify.go:221] Checking for updates...
	W1101 09:26:36.241800  348530 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 09:26:36.243809  348530 out.go:171] MINIKUBE_LOCATION=21832
	I1101 09:26:36.245209  348530 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:26:36.246719  348530 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 09:26:36.248101  348530 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 09:26:36.249406  348530 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 09:26:36.251781  348530 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 09:26:36.252037  348530 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:26:36.283601  348530 out.go:99] Using the kvm2 driver based on user configuration
	I1101 09:26:36.283661  348530 start.go:309] selected driver: kvm2
	I1101 09:26:36.283680  348530 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:26:36.284049  348530 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:26:36.284528  348530 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1101 09:26:36.284688  348530 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:26:36.284735  348530 cni.go:84] Creating CNI manager for ""
	I1101 09:26:36.284792  348530 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:26:36.284801  348530 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:26:36.284845  348530 start.go:353] cluster config:
	{Name:download-only-462566 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-462566 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:26:36.285042  348530 iso.go:125] acquiring lock: {Name:mkc74493fbbc2007c645c4ed6349cf76e7fb2185 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:26:36.286658  348530 out.go:99] Downloading VM boot image ...
	I1101 09:26:36.286706  348530 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21832-344560/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 09:26:39.171842  348530 out.go:99] Starting "download-only-462566" primary control-plane node in "download-only-462566" cluster
	I1101 09:26:39.171884  348530 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:26:39.204467  348530 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:26:39.204535  348530 cache.go:59] Caching tarball of preloaded images
	I1101 09:26:39.204733  348530 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:26:39.206354  348530 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 09:26:39.206379  348530 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 09:26:39.235287  348530 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1101 09:26:39.235420  348530 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-462566 host does not exist
	  To start a cluster, run: "minikube start -p download-only-462566"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
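Note: the LogsDuration output above shows the preload being fetched with an md5 checksum embedded in the download URL ("?checksum=md5:..."). The sketch below illustrates that download-and-verify pattern only; it is not minikube's download.go, and the URL and destination in main are placeholders (the checksum value is the one from the log).

// checksum_download_sketch.go -- illustrates the download-and-verify pattern
// suggested by the "?checksum=md5:..." URLs in the log; not minikube's
// downloader. URL and destination below are placeholders.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	// Hash while writing so the file is only trusted if the digest matches.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	fmt.Println(downloadWithMD5(
		"https://example.invalid/preload.tar.lz4", // placeholder URL
		"/tmp/preload.tar.lz4",                    // placeholder destination
		"72bc7f8573f574c02d8c9a9b3496176b"))       // checksum value copied from the log above
}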

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-462566
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (3.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-662663 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-662663 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.806695382s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.81s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 09:26:46.636700  348518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 09:26:46.636742  348518 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21832-344560/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-662663
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-662663: exit status 85 (80.048447ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-462566 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-462566 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ delete  │ -p download-only-462566                                                                                                                                                 │ download-only-462566 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │ 01 Nov 25 09:26 UTC │
	│ start   │ -o=json --download-only -p download-only-662663 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-662663 │ jenkins │ v1.37.0 │ 01 Nov 25 09:26 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:26:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:26:42.886846  348710 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:26:42.887173  348710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:26:42.887183  348710 out.go:374] Setting ErrFile to fd 2...
	I1101 09:26:42.887187  348710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:26:42.887411  348710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 09:26:42.887896  348710 out.go:368] Setting JSON to true
	I1101 09:26:42.888914  348710 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4151,"bootTime":1761985052,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:26:42.889020  348710 start.go:143] virtualization: kvm guest
	I1101 09:26:42.890741  348710 out.go:99] [download-only-662663] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:26:42.890973  348710 notify.go:221] Checking for updates...
	I1101 09:26:42.892285  348710 out.go:171] MINIKUBE_LOCATION=21832
	I1101 09:26:42.893676  348710 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:26:42.894938  348710 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 09:26:42.896172  348710 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 09:26:42.897554  348710 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-662663 host does not exist
	  To start a cluster, run: "minikube start -p download-only-662663"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.17s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-662663
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
x
+
TestBinaryMirror (0.67s)

                                                
                                                
=== RUN   TestBinaryMirror
I1101 09:26:47.372930  348518 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-267138 --alsologtostderr --binary-mirror http://127.0.0.1:35611 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-267138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-267138
--- PASS: TestBinaryMirror (0.67s)
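Note: TestBinaryMirror points minikube at a local HTTP mirror (--binary-mirror http://127.0.0.1:35611) instead of dl.k8s.io. A minimal sketch of such a mirror is below; the ./mirror directory layout is an assumption for illustration only, not a contract defined by minikube. With the mirror running, the start command shown in the Run line above resolves kubectl from this address.

// binary_mirror_sketch.go -- a minimal local mirror of the kind the test points
// minikube at with --binary-mirror; the ./mirror directory layout is an
// assumption for illustration only.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve pre-downloaded release binaries from ./mirror, for example
	// ./mirror/release/v1.34.1/bin/linux/amd64/kubectl
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving binary mirror on http://127.0.0.1:35611")
	log.Fatal(http.ListenAndServe("127.0.0.1:35611", fs))
}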

                                                
                                    
x
+
TestOffline (89.31s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-932725 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-932725 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m28.220343594s)
helpers_test.go:175: Cleaning up "offline-crio-932725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-932725
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-932725: (1.089441325s)
--- PASS: TestOffline (89.31s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-610936
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-610936: exit status 85 (67.785783ms)

                                                
                                                
-- stdout --
	* Profile "addons-610936" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-610936"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-610936
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-610936: exit status 85 (67.302001ms)

                                                
                                                
-- stdout --
	* Profile "addons-610936" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-610936"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (162.07s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-610936 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-610936 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m42.072019509s)
--- PASS: TestAddons/Setup (162.07s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-610936 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-610936 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.56s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-610936 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-610936 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [85f633d6-3539-4443-8d47-46b81caf92be] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [85f633d6-3539-4443-8d47-46b81caf92be] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004292373s
addons_test.go:694: (dbg) Run:  kubectl --context addons-610936 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-610936 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-610936 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.56s)
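Note: the FakeCredentials test asserts that the gcp-auth webhook injects credential environment variables into new pods (the two printenv commands above). A tiny sketch of the same check from inside a pod, using only the variable names that appear in the test:

// gcp_auth_env_sketch.go -- prints the two environment variables the test
// checks with printenv; intended to run inside a pod created after the
// gcp-auth webhook is active. Variable names are taken from the test.
package main

import (
	"fmt"
	"os"
)

func main() {
	for _, key := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		val, ok := os.LookupEnv(key)
		fmt.Printf("%s set=%t value=%q\n", key, ok, val)
	}
}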

                                                
                                    
x
+
TestAddons/parallel/Registry (16.85s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.084213ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-zk6f9" [8ca1aaec-2bd9-4d71-8886-79afedd32769] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005085292s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-p6swb" [bb847846-a739-4165-9043-1a8601f04bd7] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004132496s
addons_test.go:392: (dbg) Run:  kubectl --context addons-610936 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-610936 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-610936 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.764075547s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.85s)
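Note: the registry addon is verified from inside the cluster with `wget --spider` against the service DNS name. A minimal in-cluster probe with the same effect might look like the sketch below; it must run inside a pod for the cluster DNS name to resolve, and it is not the test's own code.

// registry_probe_sketch.go -- a minimal in-cluster probe equivalent to the
// test's `wget --spider`; the service DNS name is copied from the command
// above and only resolves from inside the cluster.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry responded:", resp.Status)
}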

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.77s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 10.059481ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-610936
addons_test.go:332: (dbg) Run:  kubectl --context addons-610936 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.77s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.38s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-8zz4q" [b2c07026-df4c-4c8f-a77d-b41864429b49] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00488366s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.38s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.11s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.785131ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-br7l2" [04c85380-ef98-4ac1-bf3a-5609222c5b88] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.059222092s
addons_test.go:463: (dbg) Run:  kubectl --context addons-610936 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.11s)

                                                
                                    
x
+
TestAddons/parallel/CSI (68.5s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1101 09:30:01.078442  348518 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 09:30:01.085015  348518 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 09:30:01.085047  348518 kapi.go:107] duration metric: took 6.62753ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.641811ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-610936 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-610936 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d1254e9d-6c33-43c3-8537-bcb713219dd6] Pending
helpers_test.go:352: "task-pv-pod" [d1254e9d-6c33-43c3-8537-bcb713219dd6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d1254e9d-6c33-43c3-8537-bcb713219dd6] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.005036242s
addons_test.go:572: (dbg) Run:  kubectl --context addons-610936 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-610936 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-610936 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-610936 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-610936 delete pod task-pv-pod: (1.339828878s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-610936 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-610936 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-610936 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [cce079b1-89eb-446d-90ce-94ed13e1c046] Pending
helpers_test.go:352: "task-pv-pod-restore" [cce079b1-89eb-446d-90ce-94ed13e1c046] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [cce079b1-89eb-446d-90ce-94ed13e1c046] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004366351s
addons_test.go:614: (dbg) Run:  kubectl --context addons-610936 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-610936 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-610936 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 addons disable volumesnapshots --alsologtostderr -v=1: (1.016091691s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.087024428s)
--- PASS: TestAddons/parallel/CSI (68.50s)
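Note: the long runs of helpers_test.go:402 lines above are a poll loop: the helper re-runs `kubectl get pvc ... -o jsonpath={.status.phase}` until the claim reports Bound. A stand-alone sketch of that loop, with the context and PVC names from this test filled in as examples (the 2-second poll interval is an assumption):

// pvc_wait_sketch.go -- a stand-alone version of the poll loop the helper runs
// (repeating `kubectl get pvc ... -o jsonpath={.status.phase}` until "Bound").
// Context, PVC name and namespace below are the ones used by this test.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	fmt.Println(waitForPVCBound("addons-610936", "hpvc", "default", 6*time.Minute))
}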

                                                
                                    
x
+
TestAddons/parallel/Headlamp (21.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-610936 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-fzmw7" [84d448b0-6a5c-4434-9d53-5e428406e44b] Pending
helpers_test.go:352: "headlamp-6945c6f4d-fzmw7" [84d448b0-6a5c-4434-9d53-5e428406e44b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-fzmw7" [84d448b0-6a5c-4434-9d53-5e428406e44b] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003757857s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable headlamp --alsologtostderr -v=1
2025/11/01 09:30:04 [DEBUG] GET http://192.168.39.81:5000
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 addons disable headlamp --alsologtostderr -v=1: (6.059616334s)
--- PASS: TestAddons/parallel/Headlamp (21.98s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (7.05s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-rgnq9" [f1cbbbd7-4a00-47c5-8517-312707ebd5c1] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005506401s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 addons disable cloud-spanner --alsologtostderr -v=1: (1.031261746s)
--- PASS: TestAddons/parallel/CloudSpanner (7.05s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (56.02s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-610936 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-610936 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610936 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [fbc3427f-a820-4c6d-9ca7-dee5b4ee7215] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [fbc3427f-a820-4c6d-9ca7-dee5b4ee7215] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [fbc3427f-a820-4c6d-9ca7-dee5b4ee7215] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.005221336s
addons_test.go:967: (dbg) Run:  kubectl --context addons-610936 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 ssh "cat /opt/local-path-provisioner/pvc-479a1c05-a807-4c11-a5ef-bb253fe0f186_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-610936 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-610936 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.142222753s)
--- PASS: TestAddons/parallel/LocalPath (56.02s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.95s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-668jz" [8afeb20e-4679-4c6a-b8aa-615540852043] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004477394s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.95s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.51s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-lgdfn" [ff220085-be32-4cce-9bdf-149d16e83b20] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004834074s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-610936 addons disable yakd --alsologtostderr -v=1: (6.504819918s)
--- PASS: TestAddons/parallel/Yakd (12.51s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (79.29s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-610936
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-610936: (1m19.070939978s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-610936
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-610936
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-610936
--- PASS: TestAddons/StoppedEnableDisable (79.29s)

                                                
                                    
x
+
TestCertOptions (93.85s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-842807 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E1101 10:33:10.157078  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-842807 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m32.560679223s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-842807 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-842807 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-842807 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-842807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-842807
--- PASS: TestCertOptions (93.85s)
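Annotation: the step at cert_options_test.go:60 dumps the API server certificate inside the VM and checks that the requested --apiserver-ips and --apiserver-names ended up in it. A minimal sketch of that check, assuming the cert-options-842807 profile from this run is still up (profile name, binary path and cert path are copied from the log above, not guaranteed on another machine):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs over minikube ssh.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-842807",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// The SANs requested on the start line should appear in the certificate text.
	for _, want := range []string{"192.168.15.15", "127.0.0.1", "localhost", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing SAN:", want)
		}
	}
	fmt.Println("SAN check finished")
}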

                                                
                                    
x
+
TestCertExpiration (307.14s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-383589 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-383589 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m24.140250021s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-383589 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-383589 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (42.099269934s)
helpers_test.go:175: Cleaning up "cert-expiration-383589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-383589
--- PASS: TestCertExpiration (307.14s)
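Annotation: the two starts above differ only in --cert-expiration (3m, then 8760h, i.e. one year); after the 3m certificates lapse, the second start regenerates them with the longer lifetime. A hedged sketch for inspecting the resulting expiry from outside, assuming the cert-expiration-383589 profile still exists; openssl's -enddate flag is used here as an assumption, it is not part of the test itself:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print the NotAfter timestamp of the regenerated API server certificate.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-expiration-383589",
		"ssh", "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	// Expect a notAfter roughly one year out after the --cert-expiration=8760h restart.
	fmt.Print(string(out))
}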

                                                
                                    
x
+
TestForceSystemdFlag (74.43s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-706270 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-706270 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m13.290945761s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-706270 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-706270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-706270
--- PASS: TestForceSystemdFlag (74.43s)
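Annotation: docker_test.go:132 verifies that --force-systemd took effect by reading the generated CRI-O drop-in. A sketch of the same check, assuming the force-systemd-flag-706270 profile is still present and that the drop-in sets CRI-O's cgroup_manager key to "systemd" (the key name comes from CRI-O's configuration, the expectation is inferred from the test's intent, not shown in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-706270",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in drop-in:")
		fmt.Print(string(out))
	}
}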

                                                
                                    
x
+
TestForceSystemdEnv (71.65s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-112765 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-112765 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m10.68391961s)
helpers_test.go:175: Cleaning up "force-systemd-env-112765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-112765
--- PASS: TestForceSystemdEnv (71.65s)

                                                
                                    
x
+
TestErrorSpam/setup (39.72s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-305200 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-305200 --driver=kvm2  --container-runtime=crio
E1101 09:34:30.864042  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:34:30.870497  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:34:30.881977  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:34:30.903460  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:34:30.944991  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:34:31.026519  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:34:31.188106  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:34:31.509943  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:34:32.152122  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:34:33.433893  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:34:35.996247  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:34:41.117626  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-305200 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-305200 --driver=kvm2  --container-runtime=crio: (39.722796793s)
--- PASS: TestErrorSpam/setup (39.72s)

                                                
                                    
x
+
TestErrorSpam/start (0.36s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
x
+
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
x
+
TestErrorSpam/pause (1.62s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 pause
--- PASS: TestErrorSpam/pause (1.62s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.95s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 unpause
--- PASS: TestErrorSpam/unpause (1.95s)

                                                
                                    
x
+
TestErrorSpam/stop (4.71s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 stop: (2.080673905s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 stop: (1.318379969s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-305200 --log_dir /tmp/nospam-305200 stop: (1.306424293s)
--- PASS: TestErrorSpam/stop (4.71s)
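Annotation: the TestErrorSpam subtests all follow the same pattern: run an ordinary command (start/status/pause/unpause/stop) repeatedly against the nospam-305200 profile and fail if unexpected warning or error lines show up in the output. A rough sketch of that idea; the "looks like spam" heuristic below is an assumption for illustration, not the test's real filter:

package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Exit status is deliberately ignored; only the text of the output matters here.
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "nospam-305200",
		"--log_dir", "/tmp/nospam-305200", "status").CombinedOutput()
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		line := sc.Text()
		// Hypothetical heuristic: flag anything that looks like a warning or error line.
		if strings.Contains(line, "WARNING") || strings.Contains(line, "Error") {
			fmt.Println("unexpected spam:", line)
		}
	}
}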

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21832-344560/.minikube/files/etc/test/nested/copy/348518/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (84.49s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165244 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1101 09:35:11.841348  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:35:52.804714  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-165244 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m24.490604716s)
--- PASS: TestFunctional/serial/StartWithProxy (84.49s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (62.17s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 09:36:15.966251  348518 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165244 --alsologtostderr -v=8
E1101 09:37:14.726124  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-165244 --alsologtostderr -v=8: (1m2.17365124s)
functional_test.go:678: soft start took 1m2.17461982s for "functional-165244" cluster.
I1101 09:37:18.140278  348518 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (62.17s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-165244 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 cache add registry.k8s.io/pause:3.1: (1.107812572s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 cache add registry.k8s.io/pause:3.3: (1.222025831s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 cache add registry.k8s.io/pause:latest: (1.230119625s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.56s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-165244 /tmp/TestFunctionalserialCacheCmdcacheadd_local1808717932/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 cache add minikube-local-cache-test:functional-165244
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 cache add minikube-local-cache-test:functional-165244: (1.177458856s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 cache delete minikube-local-cache-test:functional-165244
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-165244
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.54s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165244 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (195.959282ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 cache reload: (1.034629493s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.66s)
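Annotation: the cache_reload flow above is: remove the image from the node with crictl rmi, confirm crictl inspecti now fails, run minikube cache reload to push the cached image back into the node, and confirm inspecti succeeds again. A sketch of the same round trip against the functional-165244 profile, assuming it is still running:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	const img = "registry.k8s.io/pause:latest"
	run("-p", "functional-165244", "ssh", "sudo crictl rmi "+img)
	if err := run("-p", "functional-165244", "ssh", "sudo crictl inspecti "+img); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	run("-p", "functional-165244", "cache", "reload")
	if err := run("-p", "functional-165244", "ssh", "sudo crictl inspecti "+img); err != nil {
		fmt.Println("image still missing after cache reload:", err)
	}
}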

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 kubectl -- --context functional-165244 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-165244 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (35.79s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165244 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-165244 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.791265129s)
functional_test.go:776: restart took 35.791406208s for "functional-165244" cluster.
I1101 09:38:01.578260  348518 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (35.79s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-165244 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
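Annotation: functional_test.go:825 asks for the control-plane pods as JSON and then checks each one's phase and Ready condition, which is what the etcd/kube-apiserver/kube-controller-manager/kube-scheduler lines above record. A compact sketch of the same check with client-side JSON decoding; the label selector is copied from the log and the field names follow the standard Pod schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-165244",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}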

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 logs: (1.656551359s)
--- PASS: TestFunctional/serial/LogsCmd (1.66s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 logs --file /tmp/TestFunctionalserialLogsFileCmd4275770636/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 logs --file /tmp/TestFunctionalserialLogsFileCmd4275770636/001/logs.txt: (1.615032147s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.62s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.54s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-165244 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-165244
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-165244: exit status 115 (259.515215ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.117:31056 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-165244 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-165244 delete -f testdata/invalidsvc.yaml: (1.074293381s)
--- PASS: TestFunctional/serial/InvalidService (4.54s)
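Annotation: here the service has no running backing pod, so minikube service is expected to fail with exit status 115 and the SVC_UNREACHABLE reason instead of printing a usable URL. A sketch that asserts exactly that exit code; status 115 is taken from the log above and should be treated as specific to this minikube build:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-165244")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("expected the command to fail for a service with no endpoints")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 115:
		fmt.Println("got the expected SVC_UNREACHABLE failure (exit 115)")
	default:
		fmt.Printf("unexpected failure: %v\n%s", err, out)
	}
}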

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165244 config get cpus: exit status 14 (73.868615ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165244 config get cpus: exit status 14 (76.852563ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
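Annotation: the config subcommand round-trips a key: config get on an unset key exits 14 with "specified key could not be found in config", config set cpus 2 makes it retrievable, and config unset returns it to the missing state. A small sketch of that cycle; exit code 14 is copied from the log, and the assumption that config get prints only the bare value is mine:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func get(key string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-165244",
		"config", "get", key).CombinedOutput()
	code := 0
	if ee, ok := err.(*exec.ExitError); ok {
		code = ee.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	exec.Command("out/minikube-linux-amd64", "-p", "functional-165244", "config", "unset", "cpus").Run()
	if _, code := get("cpus"); code != 14 {
		fmt.Println("expected exit 14 for an unset key, got", code)
	}
	exec.Command("out/minikube-linux-amd64", "-p", "functional-165244", "config", "set", "cpus", "2").Run()
	if val, _ := get("cpus"); val != "2" {
		fmt.Println("expected cpus=2, got", val)
	}
	exec.Command("out/minikube-linux-amd64", "-p", "functional-165244", "config", "unset", "cpus").Run()
}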

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (10.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-165244 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-165244 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 354772: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.41s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165244 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-165244 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (122.787559ms)

                                                
                                                
-- stdout --
	* [functional-165244] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:38:35.698785  354692 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:38:35.699061  354692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:38:35.699072  354692 out.go:374] Setting ErrFile to fd 2...
	I1101 09:38:35.699076  354692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:38:35.699328  354692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 09:38:35.699826  354692 out.go:368] Setting JSON to false
	I1101 09:38:35.700783  354692 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4864,"bootTime":1761985052,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:38:35.700897  354692 start.go:143] virtualization: kvm guest
	I1101 09:38:35.702958  354692 out.go:179] * [functional-165244] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:38:35.704545  354692 notify.go:221] Checking for updates...
	I1101 09:38:35.704565  354692 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:38:35.706062  354692 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:38:35.707740  354692 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 09:38:35.709381  354692 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 09:38:35.710779  354692 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:38:35.712131  354692 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:38:35.713968  354692 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:38:35.714424  354692 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:38:35.748350  354692 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:38:35.749975  354692 start.go:309] selected driver: kvm2
	I1101 09:38:35.749998  354692 start.go:930] validating driver "kvm2" against &{Name:functional-165244 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-165244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:38:35.750122  354692 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:38:35.752356  354692 out.go:203] 
	W1101 09:38:35.753996  354692 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 09:38:35.755410  354692 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165244 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.25s)
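Annotation: the first dry run deliberately asks for 250MB and is rejected before anything is created: exit status 23 with reason RSRC_INSUFFICIENT_REQ_MEMORY, because the request is below the 1800MB usable minimum that minikube enforces. A sketch asserting that rejection; the exit code and the reason string are both copied from the log above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-165244",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 &&
		strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("250MB correctly rejected: below the 1800MB minimum")
		return
	}
	fmt.Printf("unexpected result: err=%v\n%s", err, out)
}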

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-165244 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-165244 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (127.404724ms)

                                                
                                                
-- stdout --
	* [functional-165244] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:38:35.575080  354676 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:38:35.575234  354676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:38:35.575246  354676 out.go:374] Setting ErrFile to fd 2...
	I1101 09:38:35.575253  354676 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:38:35.575582  354676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 09:38:35.576096  354676 out.go:368] Setting JSON to false
	I1101 09:38:35.577068  354676 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4864,"bootTime":1761985052,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:38:35.577160  354676 start.go:143] virtualization: kvm guest
	I1101 09:38:35.579341  354676 out.go:179] * [functional-165244] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1101 09:38:35.581091  354676 notify.go:221] Checking for updates...
	I1101 09:38:35.581105  354676 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 09:38:35.582856  354676 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:38:35.584436  354676 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 09:38:35.586050  354676 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 09:38:35.587902  354676 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:38:35.589374  354676 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:38:35.591166  354676 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:38:35.591637  354676 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:38:35.625316  354676 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1101 09:38:35.626507  354676 start.go:309] selected driver: kvm2
	I1101 09:38:35.626524  354676 start.go:930] validating driver "kvm2" against &{Name:functional-165244 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-165244 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:38:35.626663  354676 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:38:35.628963  354676 out.go:203] 
	W1101 09:38:35.630422  354676 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 09:38:35.631875  354676 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (20.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-165244 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-165244 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-4r7xh" [4ace3f51-0dc1-4472-a385-5288000832f3] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-4r7xh" [4ace3f51-0dc1-4472-a385-5288000832f3] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.005472821s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.117:31755
functional_test.go:1680: http://192.168.39.117:31755: success! body:
Request served by hello-node-connect-7d85dfc575-4r7xh

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.117:31755
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (20.57s)
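Annotation: the connect test is end to end: create a deployment from the kicbase/echo-server image, expose it as a NodePort service, ask minikube for the URL, and check that a plain GET is echoed back (the body above shows the echo). A sketch of the probe half, assuming the hello-node-connect service from this run still exists:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	url, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-165244",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("could not resolve service URL:", err)
		return
	}
	resp, err := http.Get(strings.TrimSpace(string(url)))
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if strings.Contains(string(body), "Request served by hello-node-connect") {
		fmt.Println("echo-server responded as expected")
	} else {
		fmt.Printf("unexpected body:\n%s", body)
	}
}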

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh -n functional-165244 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 cp functional-165244:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1586579294/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh -n functional-165244 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh -n functional-165244 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (24.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-165244 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-4w988" [724ab0ee-76a4-4632-b6d4-b0c41df4b5b4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-4w988" [724ab0ee-76a4-4632-b6d4-b0c41df4b5b4] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.005148764s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-165244 exec mysql-5bb876957f-4w988 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-165244 exec mysql-5bb876957f-4w988 -- mysql -ppassword -e "show databases;": exit status 1 (160.735343ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 09:38:32.322437  348518 retry.go:31] will retry after 642.671222ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-165244 exec mysql-5bb876957f-4w988 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-165244 exec mysql-5bb876957f-4w988 -- mysql -ppassword -e "show databases;": exit status 1 (211.055951ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1812: (dbg) Run:  kubectl --context functional-165244 exec mysql-5bb876957f-4w988 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.77s)
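Annotation: the two ERROR 2002 attempts above are expected noise: the pod reports Running before mysqld has finished initializing its socket, so the test retries "show databases;" until it succeeds. A sketch of the same retry loop; the pod name is the one from this run, and the fixed retry budget and sleep interval are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-5bb876957f-4w988" // pod name from this run; look yours up with kubectl get pods -l app=mysql
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-165244", "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d: mysqld not ready yet (%v); retrying\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql never became reachable")
}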

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/348518/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "sudo cat /etc/test/nested/copy/348518/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/348518.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "sudo cat /etc/ssl/certs/348518.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/348518.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "sudo cat /usr/share/ca-certificates/348518.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3485182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "sudo cat /etc/ssl/certs/3485182.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3485182.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "sudo cat /usr/share/ca-certificates/3485182.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.20s)
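
The FileSync and CertSync checks above both reduce to running "sudo cat <path>" over minikube ssh for a list of expected paths. A rough standalone sketch using this run's paths (the numeric components are derived from the test process, so they differ per run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Paths taken from the two tests above.
        paths := []string{
            "/etc/test/nested/copy/348518/hosts",
            "/etc/ssl/certs/348518.pem",
            "/usr/share/ca-certificates/348518.pem",
            "/etc/ssl/certs/51391683.0",
            "/etc/ssl/certs/3485182.pem",
            "/usr/share/ca-certificates/3485182.pem",
            "/etc/ssl/certs/3ec20f2e.0",
        }
        for _, p := range paths {
            cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-165244",
                "ssh", "sudo cat "+p)
            if err := cmd.Run(); err != nil {
                fmt.Printf("missing or unreadable %s: %v\n", p, err)
            }
        }
    }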

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-165244 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165244 ssh "sudo systemctl is-active docker": exit status 1 (210.191279ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165244 ssh "sudo systemctl is-active containerd": exit status 1 (187.403796ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.40s)
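
What this test relies on: "systemctl is-active" exits non-zero (status 3) when a unit is inactive, so with crio as the selected runtime the two non-zero exits above are the expected, passing outcome. A small sketch of the same probe (illustrative, not the test code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // With crio selected, docker and containerd should both report "inactive".
        for _, unit := range []string{"docker", "containerd"} {
            out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-165244",
                "ssh", "sudo systemctl is-active "+unit).Output()
            state := strings.TrimSpace(string(out))
            if err == nil {
                fmt.Printf("unexpected: %s reports %q (active)\n", unit, state)
                continue
            }
            fmt.Printf("%s: %s (inactive, as expected)\n", unit, state)
        }
    }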

                                                
                                    
TestFunctional/parallel/License (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.69s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 image ls --format short --alsologtostderr: (1.09333465s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165244 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-165244
localhost/kicbase/echo-server:functional-165244
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-165244 image ls --format short --alsologtostderr:
I1101 09:38:43.203060  355107 out.go:360] Setting OutFile to fd 1 ...
I1101 09:38:43.203389  355107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:38:43.203401  355107 out.go:374] Setting ErrFile to fd 2...
I1101 09:38:43.203406  355107 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:38:43.203598  355107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
I1101 09:38:43.204260  355107 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:38:43.204362  355107 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:38:43.206734  355107 ssh_runner.go:195] Run: systemctl --version
I1101 09:38:43.209041  355107 main.go:143] libmachine: domain functional-165244 has defined MAC address 52:54:00:07:86:a8 in network mk-functional-165244
I1101 09:38:43.209524  355107 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:07:86:a8", ip: ""} in network mk-functional-165244: {Iface:virbr1 ExpiryTime:2025-11-01 10:35:08 +0000 UTC Type:0 Mac:52:54:00:07:86:a8 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-165244 Clientid:01:52:54:00:07:86:a8}
I1101 09:38:43.209559  355107 main.go:143] libmachine: domain functional-165244 has defined IP address 192.168.39.117 and MAC address 52:54:00:07:86:a8 in network mk-functional-165244
I1101 09:38:43.209748  355107 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/functional-165244/id_rsa Username:docker}
I1101 09:38:43.316017  355107 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.09s)
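
As the stderr shows, "image ls" is backed by "sudo crictl images --output json" inside the VM. A standalone sketch that prints a similar short tag list; the JSON field names (images, repoTags) are an assumption based on crictl's usual JSON shape, not something the log spells out.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Assumed shape of crictl's JSON output: {"images":[{"repoTags":[...]}]}.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-165244",
            "ssh", "sudo crictl images --output json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag)
            }
        }
    }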

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165244 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-165244  │ 4f4dce5367966 │ 3.33kB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.95MB │
│ localhost/kicbase/echo-server           │ functional-165244  │ 9056ab77afb8e │ 4.95MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ docker.io/library/nginx                 │ latest             │ 9d0e6f6199dcb │ 155MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-165244 image ls --format table --alsologtostderr:
I1101 09:38:44.613035  355164 out.go:360] Setting OutFile to fd 1 ...
I1101 09:38:44.613355  355164 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:38:44.613366  355164 out.go:374] Setting ErrFile to fd 2...
I1101 09:38:44.613371  355164 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:38:44.613558  355164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
I1101 09:38:44.614186  355164 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:38:44.614291  355164 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:38:44.616736  355164 ssh_runner.go:195] Run: systemctl --version
I1101 09:38:44.619456  355164 main.go:143] libmachine: domain functional-165244 has defined MAC address 52:54:00:07:86:a8 in network mk-functional-165244
I1101 09:38:44.619962  355164 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:07:86:a8", ip: ""} in network mk-functional-165244: {Iface:virbr1 ExpiryTime:2025-11-01 10:35:08 +0000 UTC Type:0 Mac:52:54:00:07:86:a8 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-165244 Clientid:01:52:54:00:07:86:a8}
I1101 09:38:44.619997  355164 main.go:143] libmachine: domain functional-165244 has defined IP address 192.168.39.117 and MAC address 52:54:00:07:86:a8 in network mk-functional-165244
I1101 09:38:44.620165  355164 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/functional-165244/id_rsa Username:docker}
I1101 09:38:44.733414  355164 ssh_runner.go:195] Run: sudo crictl images --output json
2025/11/01 09:38:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165244 image ls --format json --alsologtostderr:
[{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"r
epoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec","repoDigests":["docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384
401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo
-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-165244"],"size":"4945246"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"si
ze":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"4f4dce5367966ae39b9d6998bb9a173b1fa507e1d92d3042bdaac456c53fa50b","repoDigests":["localhost/minikube-local-cache-test@sha256:b401101960b6b1ac13a3762ecd5b2d2f01c654b0920c98d9fd505fa1da964e78"],"repoTags":["localhost/minikube-local-cache-test:functional-165244"],
"size":"3330"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a76
2da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-165244 image ls --format json --alsologtostderr:
I1101 09:38:44.281555  355134 out.go:360] Setting OutFile to fd 1 ...
I1101 09:38:44.281934  355134 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:38:44.281948  355134 out.go:374] Setting ErrFile to fd 2...
I1101 09:38:44.281956  355134 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:38:44.282324  355134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
I1101 09:38:44.283194  355134 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:38:44.283349  355134 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:38:44.286251  355134 ssh_runner.go:195] Run: systemctl --version
I1101 09:38:44.289805  355134 main.go:143] libmachine: domain functional-165244 has defined MAC address 52:54:00:07:86:a8 in network mk-functional-165244
I1101 09:38:44.290236  355134 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:07:86:a8", ip: ""} in network mk-functional-165244: {Iface:virbr1 ExpiryTime:2025-11-01 10:35:08 +0000 UTC Type:0 Mac:52:54:00:07:86:a8 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-165244 Clientid:01:52:54:00:07:86:a8}
I1101 09:38:44.290266  355134 main.go:143] libmachine: domain functional-165244 has defined IP address 192.168.39.117 and MAC address 52:54:00:07:86:a8 in network mk-functional-165244
I1101 09:38:44.290650  355134 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/functional-165244/id_rsa Username:docker}
I1101 09:38:44.410544  355134 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image ls --format yaml --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 image ls --format yaml --alsologtostderr: (1.015006429s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165244 image ls --format yaml --alsologtostderr:
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-165244
size: "4945246"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec
repoDigests:
- docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 4f4dce5367966ae39b9d6998bb9a173b1fa507e1d92d3042bdaac456c53fa50b
repoDigests:
- localhost/minikube-local-cache-test@sha256:b401101960b6b1ac13a3762ecd5b2d2f01c654b0920c98d9fd505fa1da964e78
repoTags:
- localhost/minikube-local-cache-test:functional-165244
size: "3330"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-165244 image ls --format yaml --alsologtostderr:
I1101 09:38:43.264358  355121 out.go:360] Setting OutFile to fd 1 ...
I1101 09:38:43.264669  355121 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:38:43.264680  355121 out.go:374] Setting ErrFile to fd 2...
I1101 09:38:43.264685  355121 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:38:43.265002  355121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
I1101 09:38:43.265689  355121 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:38:43.265837  355121 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:38:43.268191  355121 ssh_runner.go:195] Run: systemctl --version
I1101 09:38:43.270678  355121 main.go:143] libmachine: domain functional-165244 has defined MAC address 52:54:00:07:86:a8 in network mk-functional-165244
I1101 09:38:43.271182  355121 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:07:86:a8", ip: ""} in network mk-functional-165244: {Iface:virbr1 ExpiryTime:2025-11-01 10:35:08 +0000 UTC Type:0 Mac:52:54:00:07:86:a8 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-165244 Clientid:01:52:54:00:07:86:a8}
I1101 09:38:43.271211  355121 main.go:143] libmachine: domain functional-165244 has defined IP address 192.168.39.117 and MAC address 52:54:00:07:86:a8 in network mk-functional-165244
I1101 09:38:43.271356  355121 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/functional-165244/id_rsa Username:docker}
I1101 09:38:43.370941  355121 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (1.02s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165244 ssh pgrep buildkitd: exit status 1 (230.868438ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image build -t localhost/my-image:functional-165244 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 image build -t localhost/my-image:functional-165244 testdata/build --alsologtostderr: (2.050733623s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-165244 image build -t localhost/my-image:functional-165244 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e730af07ce4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-165244
--> a02de5b3021
Successfully tagged localhost/my-image:functional-165244
a02de5b302152f6a1cdfca7905aa0ddd35b27349f55730ae8e6560801662273e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-165244 image build -t localhost/my-image:functional-165244 testdata/build --alsologtostderr:
I1101 09:38:44.510351  355154 out.go:360] Setting OutFile to fd 1 ...
I1101 09:38:44.510561  355154 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:38:44.510577  355154 out.go:374] Setting ErrFile to fd 2...
I1101 09:38:44.510583  355154 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:38:44.510978  355154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
I1101 09:38:44.511949  355154 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:38:44.512903  355154 config.go:182] Loaded profile config "functional-165244": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:38:44.516119  355154 ssh_runner.go:195] Run: systemctl --version
I1101 09:38:44.519216  355154 main.go:143] libmachine: domain functional-165244 has defined MAC address 52:54:00:07:86:a8 in network mk-functional-165244
I1101 09:38:44.519741  355154 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:07:86:a8", ip: ""} in network mk-functional-165244: {Iface:virbr1 ExpiryTime:2025-11-01 10:35:08 +0000 UTC Type:0 Mac:52:54:00:07:86:a8 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-165244 Clientid:01:52:54:00:07:86:a8}
I1101 09:38:44.519778  355154 main.go:143] libmachine: domain functional-165244 has defined IP address 192.168.39.117 and MAC address 52:54:00:07:86:a8 in network mk-functional-165244
I1101 09:38:44.519984  355154 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/functional-165244/id_rsa Username:docker}
I1101 09:38:44.629458  355154 build_images.go:162] Building image from path: /tmp/build.1487123199.tar
I1101 09:38:44.629544  355154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 09:38:44.654121  355154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1487123199.tar
I1101 09:38:44.667161  355154 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1487123199.tar: stat -c "%s %y" /var/lib/minikube/build/build.1487123199.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1487123199.tar': No such file or directory
I1101 09:38:44.667206  355154 ssh_runner.go:362] scp /tmp/build.1487123199.tar --> /var/lib/minikube/build/build.1487123199.tar (3072 bytes)
I1101 09:38:44.706998  355154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1487123199
I1101 09:38:44.730672  355154 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1487123199 -xf /var/lib/minikube/build/build.1487123199.tar
I1101 09:38:44.754173  355154 crio.go:315] Building image: /var/lib/minikube/build/build.1487123199
I1101 09:38:44.754257  355154 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-165244 /var/lib/minikube/build/build.1487123199 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1101 09:38:46.450135  355154 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-165244 /var/lib/minikube/build/build.1487123199 --cgroup-manager=cgroupfs: (1.695846477s)
I1101 09:38:46.450216  355154 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1487123199
I1101 09:38:46.465790  355154 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1487123199.tar
I1101 09:38:46.481063  355154 build_images.go:218] Built localhost/my-image:functional-165244 from /tmp/build.1487123199.tar
I1101 09:38:46.481103  355154 build_images.go:134] succeeded building to: functional-165244
I1101 09:38:46.481108  355154 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image ls
E1101 09:39:30.856341  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:39:58.567594  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:30.855660  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.49s)
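
The stderr above shows the build path: the context is tarred on the host, copied into the VM, unpacked under /var/lib/minikube/build, and built with podman using the cgroupfs cgroup manager. A sketch of just the final step, assuming the unpacked context directory from this run (a fresh run would use a different temporary name):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        ctx := "/var/lib/minikube/build/build.1487123199" // this run's context dir
        cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-165244", "ssh",
            "sudo podman build -t localhost/my-image:functional-165244 "+ctx+
                " --cgroup-manager=cgroupfs")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("build failed:", err)
        }
    }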

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-165244
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image load --daemon kicbase/echo-server:functional-165244 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 image load --daemon kicbase/echo-server:functional-165244 --alsologtostderr: (1.384211287s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)
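
A rough end-to-end sketch of the daemon-load path checked here: tag an image in the host Docker daemon (as the Setup test did), stream it into the cluster with "image load --daemon", then list the runtime's images to confirm it landed. Illustrative only:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }

    func main() {
        run("docker", "tag", "kicbase/echo-server:1.0", "kicbase/echo-server:functional-165244")
        run("out/minikube-linux-amd64", "-p", "functional-165244",
            "image", "load", "--daemon", "kicbase/echo-server:functional-165244")
        run("out/minikube-linux-amd64", "-p", "functional-165244", "image", "ls")
    }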

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image load --daemon kicbase/echo-server:functional-165244 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-165244
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image load --daemon kicbase/echo-server:functional-165244 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image save kicbase/echo-server:functional-165244 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 image save kicbase/echo-server:functional-165244 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (8.103406138s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image rm kicbase/echo-server:functional-165244 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.97s)
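
Taken together with ImageSaveToFile above, this is a tarball round trip. A compact sketch, with /tmp/echo-server-save.tar as an illustrative path (the run above used a file in the Jenkins workspace):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) {
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }

    func main() {
        tarball := "/tmp/echo-server-save.tar" // illustrative path
        run("-p", "functional-165244", "image", "save",
            "kicbase/echo-server:functional-165244", tarball)
        run("-p", "functional-165244", "image", "load", tarball)
        run("-p", "functional-165244", "image", "ls")
    }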

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-165244
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 image save --daemon kicbase/echo-server:functional-165244 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-165244
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.64s)
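
A sketch of the reverse direction checked here: remove the host-side tag, push the image back from the cluster into the local Docker daemon with "image save --daemon", then verify it arrived under the localhost/ name used in the inspect step above. Illustrative only:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        _ = run("docker", "rmi", "kicbase/echo-server:functional-165244")
        if err := run("out/minikube-linux-amd64", "-p", "functional-165244",
            "image", "save", "--daemon", "kicbase/echo-server:functional-165244"); err != nil {
            panic(err)
        }
        if err := run("docker", "image", "inspect",
            "localhost/kicbase/echo-server:functional-165244"); err != nil {
            panic(err)
        }
    }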

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-165244 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-165244 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-9crl9" [c995eea8-780c-4933-99fa-8674a74f2ac7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-9crl9" [c995eea8-780c-4933-99fa-8674a74f2ac7] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.00730545s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.28s)
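
The deploy step reduces to three kubectl calls. A sketch that reproduces it outside the harness, using "kubectl wait" in place of the test's own pod poller (the helpers_test.go lines above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func kubectl(args ...string) {
        full := append([]string{"--context", "functional-165244"}, args...)
        out, err := exec.Command("kubectl", full...).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Create the deployment, expose it as a NodePort service, then block
        // until a pod with the app=hello-node label is Ready (~12s above).
        kubectl("create", "deployment", "hello-node", "--image", "kicbase/echo-server")
        kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")
        kubectl("wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=10m")
    }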

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "261.799451ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "68.766397ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "301.87425ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "95.923719ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
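
A small sketch for comparing the timings reported above; the -l/--light variants presumably skip the per-profile status probe, which would explain why they return several times faster:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func timed(args ...string) time.Duration {
        start := time.Now()
        _ = exec.Command("out/minikube-linux-amd64", args...).Run()
        return time.Since(start)
    }

    func main() {
        fmt.Println("profile list:              ", timed("profile", "list"))
        fmt.Println("profile list -l:           ", timed("profile", "list", "-l"))
        fmt.Println("profile list -o json:      ", timed("profile", "list", "-o", "json"))
        fmt.Println("profile list -o json --light:", timed("profile", "list", "-o", "json", "--light"))
    }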

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-165244 /tmp/TestFunctionalparallelMountCmdany-port2806445802/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761989913177064882" to /tmp/TestFunctionalparallelMountCmdany-port2806445802/001/created-by-test
I1101 09:38:33.177159  348518 retry.go:31] will retry after 1.379846035s: exit status 1
functional_test_mount_test.go:107: wrote "test-1761989913177064882" to /tmp/TestFunctionalparallelMountCmdany-port2806445802/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761989913177064882" to /tmp/TestFunctionalparallelMountCmdany-port2806445802/001/test-1761989913177064882
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165244 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.842382ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:38:33.335244  348518 retry.go:31] will retry after 437.528576ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 09:38 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 09:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 09:38 test-1761989913177064882
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh cat /mount-9p/test-1761989913177064882
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-165244 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [9ccd6612-00eb-48a8-8ffe-ff3fcd043624] Pending
helpers_test.go:352: "busybox-mount" [9ccd6612-00eb-48a8-8ffe-ff3fcd043624] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [9ccd6612-00eb-48a8-8ffe-ff3fcd043624] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [9ccd6612-00eb-48a8-8ffe-ff3fcd043624] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005970606s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-165244 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165244 /tmp/TestFunctionalparallelMountCmdany-port2806445802/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.03s)
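
For reference, the mount flow above can be reproduced by hand. A minimal sketch, assuming an illustrative host directory /tmp/mount-demo and using plain minikube in place of the test binary out/minikube-linux-amd64:

    # terminal 1: serve /tmp/mount-demo into the guest over 9p (host path is illustrative)
    mkdir -p /tmp/mount-demo && echo hello > /tmp/mount-demo/created-by-hand
    minikube mount -p functional-165244 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1

    # terminal 2: verify the 9p mount, list its contents, then force-unmount, as the test does
    minikube -p functional-165244 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-165244 ssh -- ls -la /mount-9p
    minikube -p functional-165244 ssh "sudo umount -f /mount-9p"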

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 service list: (1.305909497s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-165244 service list -o json: (1.254368922s)
functional_test.go:1504: Took "1.254489229s" to run "out/minikube-linux-amd64 -p functional-165244 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)
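
The JSON listing verified above can be consumed directly from a shell. A minimal sketch (jq is an external tool used here only for pretty-printing and is not part of the test):

    minikube -p functional-165244 service list                  # human-readable table
    minikube -p functional-165244 service list -o json | jq .   # same data as JSON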

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-165244 /tmp/TestFunctionalparallelMountCmdspecific-port2025363609/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165244 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (202.678823ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:38:40.405805  348518 retry.go:31] will retry after 585.357859ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165244 /tmp/TestFunctionalparallelMountCmdspecific-port2025363609/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165244 ssh "sudo umount -f /mount-9p": exit status 1 (187.96721ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-165244 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165244 /tmp/TestFunctionalparallelMountCmdspecific-port2025363609/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.61s)
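
The exit status 32 from the forced umount above is just "not mounted", reported after the mount daemon has already cleaned up, and does not fail the test. A minimal sketch of pinning the mount server to a fixed host port as exercised here (host path illustrative):

    # serve the mount on host port 46464 instead of a random port
    minikube mount -p functional-165244 /tmp/mount-demo:/mount-9p --port 46464 --alsologtostderr -v=1
    # in another terminal: verify, then tear down (umount exits 32 once nothing is mounted)
    minikube -p functional-165244 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-165244 ssh "sudo umount -f /mount-9p"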

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.117:31213
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.117:31213
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-165244 /tmp/TestFunctionalparallelMountCmdVerifyCleanup529243428/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-165244 /tmp/TestFunctionalparallelMountCmdVerifyCleanup529243428/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-165244 /tmp/TestFunctionalparallelMountCmdVerifyCleanup529243428/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-165244 ssh "findmnt -T" /mount1: exit status 1 (214.686974ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 09:38:42.025149  348518 retry.go:31] will retry after 357.305338ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-165244 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-165244 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165244 /tmp/TestFunctionalparallelMountCmdVerifyCleanup529243428/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165244 /tmp/TestFunctionalparallelMountCmdVerifyCleanup529243428/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-165244 /tmp/TestFunctionalparallelMountCmdVerifyCleanup529243428/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.31s)
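
A minimal sketch of the cleanup path exercised above: several mount daemons are started against one host directory, and a single --kill=true invocation stops all mount processes for the profile (host path illustrative):

    for m in /mount1 /mount2 /mount3; do
      minikube mount -p functional-165244 /tmp/mount-demo:$m --alsologtostderr -v=1 &
    done
    minikube -p functional-165244 ssh "findmnt -T /mount1"   # repeat for /mount2 and /mount3
    minikube mount -p functional-165244 --kill=true          # stops every mount daemon for this profile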

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-165244
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-165244
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-165244
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (208.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-967493 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m27.501894936s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 status --alsologtostderr -v 5
E1101 09:48:10.157035  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:48:10.163576  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:48:10.175282  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:48:10.196969  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:48:10.238266  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:48:10.320389  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/StartCluster (208.10s)
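
The same multi-control-plane cluster can be brought up outside the harness with the flags shown above. A minimal sketch, assuming an illustrative profile name ha-demo:

    minikube start -p ha-demo --ha --memory 3072 --wait true \
      --driver=kvm2 --container-runtime=crio --alsologtostderr -v 5
    minikube -p ha-demo status --alsologtostderr -v 5   # prints one block per node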

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
E1101 09:48:10.482350  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- rollout status deployment/busybox
E1101 09:48:10.804154  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:48:11.446215  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:48:12.728018  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-967493 kubectl -- rollout status deployment/busybox: (3.015024276s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-25hjt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-nfrgt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-tzmff -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-25hjt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-nfrgt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-tzmff -- nslookup kubernetes.default
E1101 09:48:15.290258  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-25hjt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-nfrgt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-tzmff -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.59s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-25hjt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-25hjt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-nfrgt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-nfrgt -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-tzmff -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 kubectl -- exec busybox-7b57f96db7-tzmff -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.42s)
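
A minimal sketch of the per-pod host-reachability check above; the pod name is taken from this run and will differ elsewhere:

    POD=busybox-7b57f96db7-25hjt
    # resolve the hypervisor host from inside the pod, then ping it from the same pod
    HOST_IP=$(minikube -p ha-967493 kubectl -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    minikube -p ha-967493 kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"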

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (75.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 node add --alsologtostderr -v 5
E1101 09:48:20.411795  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:48:30.653377  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:48:51.134823  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:49:30.856267  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-967493 node add --alsologtostderr -v 5: (1m14.351649019s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 status --alsologtostderr -v 5
E1101 09:49:32.096750  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (75.11s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-967493 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (11.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp testdata/cp-test.txt ha-967493:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2019135101/001/cp-test_ha-967493.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493:/home/docker/cp-test.txt ha-967493-m02:/home/docker/cp-test_ha-967493_ha-967493-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m02 "sudo cat /home/docker/cp-test_ha-967493_ha-967493-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493:/home/docker/cp-test.txt ha-967493-m03:/home/docker/cp-test_ha-967493_ha-967493-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m03 "sudo cat /home/docker/cp-test_ha-967493_ha-967493-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493:/home/docker/cp-test.txt ha-967493-m04:/home/docker/cp-test_ha-967493_ha-967493-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m04 "sudo cat /home/docker/cp-test_ha-967493_ha-967493-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp testdata/cp-test.txt ha-967493-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2019135101/001/cp-test_ha-967493-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m02:/home/docker/cp-test.txt ha-967493:/home/docker/cp-test_ha-967493-m02_ha-967493.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493 "sudo cat /home/docker/cp-test_ha-967493-m02_ha-967493.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m02:/home/docker/cp-test.txt ha-967493-m03:/home/docker/cp-test_ha-967493-m02_ha-967493-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m03 "sudo cat /home/docker/cp-test_ha-967493-m02_ha-967493-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m02:/home/docker/cp-test.txt ha-967493-m04:/home/docker/cp-test_ha-967493-m02_ha-967493-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m04 "sudo cat /home/docker/cp-test_ha-967493-m02_ha-967493-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp testdata/cp-test.txt ha-967493-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2019135101/001/cp-test_ha-967493-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m03:/home/docker/cp-test.txt ha-967493:/home/docker/cp-test_ha-967493-m03_ha-967493.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493 "sudo cat /home/docker/cp-test_ha-967493-m03_ha-967493.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m03:/home/docker/cp-test.txt ha-967493-m02:/home/docker/cp-test_ha-967493-m03_ha-967493-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m02 "sudo cat /home/docker/cp-test_ha-967493-m03_ha-967493-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m03:/home/docker/cp-test.txt ha-967493-m04:/home/docker/cp-test_ha-967493-m03_ha-967493-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m04 "sudo cat /home/docker/cp-test_ha-967493-m03_ha-967493-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp testdata/cp-test.txt ha-967493-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2019135101/001/cp-test_ha-967493-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m04:/home/docker/cp-test.txt ha-967493:/home/docker/cp-test_ha-967493-m04_ha-967493.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493 "sudo cat /home/docker/cp-test_ha-967493-m04_ha-967493.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m04:/home/docker/cp-test.txt ha-967493-m02:/home/docker/cp-test_ha-967493-m04_ha-967493-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m02 "sudo cat /home/docker/cp-test_ha-967493-m04_ha-967493-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 cp ha-967493-m04:/home/docker/cp-test.txt ha-967493-m03:/home/docker/cp-test_ha-967493-m04_ha-967493-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 ssh -n ha-967493-m03 "sudo cat /home/docker/cp-test_ha-967493-m04_ha-967493-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (11.46s)
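
A minimal sketch of the copy matrix above, reduced to one host-to-node hop and one node-to-node hop (the file name is illustrative):

    echo hello > /tmp/cp-demo.txt
    minikube -p ha-967493 cp /tmp/cp-demo.txt ha-967493:/home/docker/cp-demo.txt                        # host -> node
    minikube -p ha-967493 cp ha-967493:/home/docker/cp-demo.txt ha-967493-m02:/home/docker/cp-demo.txt  # node -> node
    minikube -p ha-967493 ssh -n ha-967493-m02 "sudo cat /home/docker/cp-demo.txt"                      # verify on m02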

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (88.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 node stop m02 --alsologtostderr -v 5
E1101 09:50:53.929050  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:50:54.018664  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-967493 node stop m02 --alsologtostderr -v 5: (1m27.871613909s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-967493 status --alsologtostderr -v 5: exit status 7 (552.527347ms)

                                                
                                                
-- stdout --
	ha-967493
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-967493-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-967493-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-967493-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:51:12.720844  359549 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:51:12.721114  359549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:51:12.721125  359549 out.go:374] Setting ErrFile to fd 2...
	I1101 09:51:12.721129  359549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:51:12.721351  359549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 09:51:12.721537  359549 out.go:368] Setting JSON to false
	I1101 09:51:12.721566  359549 mustload.go:66] Loading cluster: ha-967493
	I1101 09:51:12.721765  359549 notify.go:221] Checking for updates...
	I1101 09:51:12.721962  359549 config.go:182] Loaded profile config "ha-967493": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:51:12.721978  359549 status.go:174] checking status of ha-967493 ...
	I1101 09:51:12.724213  359549 status.go:371] ha-967493 host status = "Running" (err=<nil>)
	I1101 09:51:12.724233  359549 host.go:66] Checking if "ha-967493" exists ...
	I1101 09:51:12.727063  359549 main.go:143] libmachine: domain ha-967493 has defined MAC address 52:54:00:a5:92:b4 in network mk-ha-967493
	I1101 09:51:12.727582  359549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:92:b4", ip: ""} in network mk-ha-967493: {Iface:virbr1 ExpiryTime:2025-11-01 10:44:58 +0000 UTC Type:0 Mac:52:54:00:a5:92:b4 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-967493 Clientid:01:52:54:00:a5:92:b4}
	I1101 09:51:12.727612  359549 main.go:143] libmachine: domain ha-967493 has defined IP address 192.168.39.195 and MAC address 52:54:00:a5:92:b4 in network mk-ha-967493
	I1101 09:51:12.727813  359549 host.go:66] Checking if "ha-967493" exists ...
	I1101 09:51:12.728110  359549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:51:12.730728  359549 main.go:143] libmachine: domain ha-967493 has defined MAC address 52:54:00:a5:92:b4 in network mk-ha-967493
	I1101 09:51:12.731286  359549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:92:b4", ip: ""} in network mk-ha-967493: {Iface:virbr1 ExpiryTime:2025-11-01 10:44:58 +0000 UTC Type:0 Mac:52:54:00:a5:92:b4 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:ha-967493 Clientid:01:52:54:00:a5:92:b4}
	I1101 09:51:12.731328  359549 main.go:143] libmachine: domain ha-967493 has defined IP address 192.168.39.195 and MAC address 52:54:00:a5:92:b4 in network mk-ha-967493
	I1101 09:51:12.731493  359549 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/ha-967493/id_rsa Username:docker}
	I1101 09:51:12.816669  359549 ssh_runner.go:195] Run: systemctl --version
	I1101 09:51:12.829193  359549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:51:12.850376  359549 kubeconfig.go:125] found "ha-967493" server: "https://192.168.39.254:8443"
	I1101 09:51:12.850415  359549 api_server.go:166] Checking apiserver status ...
	I1101 09:51:12.850465  359549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:51:12.876962  359549 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup
	W1101 09:51:12.890663  359549 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:51:12.890718  359549 ssh_runner.go:195] Run: ls
	I1101 09:51:12.896625  359549 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1101 09:51:12.902102  359549 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1101 09:51:12.902139  359549 status.go:463] ha-967493 apiserver status = Running (err=<nil>)
	I1101 09:51:12.902158  359549 status.go:176] ha-967493 status: &{Name:ha-967493 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:51:12.902184  359549 status.go:174] checking status of ha-967493-m02 ...
	I1101 09:51:12.904086  359549 status.go:371] ha-967493-m02 host status = "Stopped" (err=<nil>)
	I1101 09:51:12.904103  359549 status.go:384] host is not running, skipping remaining checks
	I1101 09:51:12.904108  359549 status.go:176] ha-967493-m02 status: &{Name:ha-967493-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:51:12.904122  359549 status.go:174] checking status of ha-967493-m03 ...
	I1101 09:51:12.905477  359549 status.go:371] ha-967493-m03 host status = "Running" (err=<nil>)
	I1101 09:51:12.905493  359549 host.go:66] Checking if "ha-967493-m03" exists ...
	I1101 09:51:12.908114  359549 main.go:143] libmachine: domain ha-967493-m03 has defined MAC address 52:54:00:9b:7e:45 in network mk-ha-967493
	I1101 09:51:12.908577  359549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:7e:45", ip: ""} in network mk-ha-967493: {Iface:virbr1 ExpiryTime:2025-11-01 10:47:01 +0000 UTC Type:0 Mac:52:54:00:9b:7e:45 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-967493-m03 Clientid:01:52:54:00:9b:7e:45}
	I1101 09:51:12.908602  359549 main.go:143] libmachine: domain ha-967493-m03 has defined IP address 192.168.39.142 and MAC address 52:54:00:9b:7e:45 in network mk-ha-967493
	I1101 09:51:12.908773  359549 host.go:66] Checking if "ha-967493-m03" exists ...
	I1101 09:51:12.908982  359549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:51:12.911595  359549 main.go:143] libmachine: domain ha-967493-m03 has defined MAC address 52:54:00:9b:7e:45 in network mk-ha-967493
	I1101 09:51:12.911936  359549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:7e:45", ip: ""} in network mk-ha-967493: {Iface:virbr1 ExpiryTime:2025-11-01 10:47:01 +0000 UTC Type:0 Mac:52:54:00:9b:7e:45 Iaid: IPaddr:192.168.39.142 Prefix:24 Hostname:ha-967493-m03 Clientid:01:52:54:00:9b:7e:45}
	I1101 09:51:12.911976  359549 main.go:143] libmachine: domain ha-967493-m03 has defined IP address 192.168.39.142 and MAC address 52:54:00:9b:7e:45 in network mk-ha-967493
	I1101 09:51:12.912110  359549 sshutil.go:53] new ssh client: &{IP:192.168.39.142 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/ha-967493-m03/id_rsa Username:docker}
	I1101 09:51:13.003669  359549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:51:13.026552  359549 kubeconfig.go:125] found "ha-967493" server: "https://192.168.39.254:8443"
	I1101 09:51:13.026586  359549 api_server.go:166] Checking apiserver status ...
	I1101 09:51:13.026626  359549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:51:13.055241  359549 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1791/cgroup
	W1101 09:51:13.068730  359549 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1791/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:51:13.068821  359549 ssh_runner.go:195] Run: ls
	I1101 09:51:13.074562  359549 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1101 09:51:13.080328  359549 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1101 09:51:13.080365  359549 status.go:463] ha-967493-m03 apiserver status = Running (err=<nil>)
	I1101 09:51:13.080379  359549 status.go:176] ha-967493-m03 status: &{Name:ha-967493-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:51:13.080406  359549 status.go:174] checking status of ha-967493-m04 ...
	I1101 09:51:13.082083  359549 status.go:371] ha-967493-m04 host status = "Running" (err=<nil>)
	I1101 09:51:13.082107  359549 host.go:66] Checking if "ha-967493-m04" exists ...
	I1101 09:51:13.084759  359549 main.go:143] libmachine: domain ha-967493-m04 has defined MAC address 52:54:00:34:f2:73 in network mk-ha-967493
	I1101 09:51:13.085191  359549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:f2:73", ip: ""} in network mk-ha-967493: {Iface:virbr1 ExpiryTime:2025-11-01 10:48:34 +0000 UTC Type:0 Mac:52:54:00:34:f2:73 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-967493-m04 Clientid:01:52:54:00:34:f2:73}
	I1101 09:51:13.085215  359549 main.go:143] libmachine: domain ha-967493-m04 has defined IP address 192.168.39.217 and MAC address 52:54:00:34:f2:73 in network mk-ha-967493
	I1101 09:51:13.085358  359549 host.go:66] Checking if "ha-967493-m04" exists ...
	I1101 09:51:13.085610  359549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:51:13.088199  359549 main.go:143] libmachine: domain ha-967493-m04 has defined MAC address 52:54:00:34:f2:73 in network mk-ha-967493
	I1101 09:51:13.088554  359549 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:34:f2:73", ip: ""} in network mk-ha-967493: {Iface:virbr1 ExpiryTime:2025-11-01 10:48:34 +0000 UTC Type:0 Mac:52:54:00:34:f2:73 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:ha-967493-m04 Clientid:01:52:54:00:34:f2:73}
	I1101 09:51:13.088574  359549 main.go:143] libmachine: domain ha-967493-m04 has defined IP address 192.168.39.217 and MAC address 52:54:00:34:f2:73 in network mk-ha-967493
	I1101 09:51:13.088754  359549 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/ha-967493-m04/id_rsa Username:docker}
	I1101 09:51:13.178105  359549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:51:13.200844  359549 status.go:176] ha-967493-m04 status: &{Name:ha-967493-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (88.42s)
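
A minimal sketch of the same degradation check: stop one control-plane node and confirm that status reports it Stopped; status exits non-zero (7 in this run) while any node is down:

    minikube -p ha-967493 node stop m02 --alsologtostderr -v 5
    minikube -p ha-967493 status --alsologtostderr -v 5
    echo "status exit code: $?"    # 7 here, because ha-967493-m02 is stopped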

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (45.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-967493 node start m02 --alsologtostderr -v 5: (44.111475266s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (45.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.99s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 stop --alsologtostderr -v 5
E1101 09:53:10.156789  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:53:37.861740  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:54:30.856033  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-967493 stop --alsologtostderr -v 5: (4m3.436799082s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 start --wait true --alsologtostderr -v 5
E1101 09:58:10.156535  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-967493 start --wait true --alsologtostderr -v 5: (2m15.209629693s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (378.80s)
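
A minimal sketch of the restart round-trip verified above: record the node list, stop and restart the whole profile, and confirm the list is unchanged:

    minikube -p ha-967493 node list --alsologtostderr -v 5 > /tmp/nodes-before.txt
    minikube -p ha-967493 stop --alsologtostderr -v 5
    minikube -p ha-967493 start --wait true --alsologtostderr -v 5
    minikube -p ha-967493 node list --alsologtostderr -v 5 | diff /tmp/nodes-before.txt -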

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (18.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-967493 node delete m03 --alsologtostderr -v 5: (17.827640933s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.51s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (229.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 stop --alsologtostderr -v 5
E1101 09:59:30.855470  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-967493 stop --alsologtostderr -v 5: (3m49.312348767s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-967493 status --alsologtostderr -v 5: exit status 7 (70.832847ms)

                                                
                                                
-- stdout --
	ha-967493
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-967493-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-967493-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:02:27.084044  362780 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:02:27.084305  362780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:02:27.084316  362780 out.go:374] Setting ErrFile to fd 2...
	I1101 10:02:27.084319  362780 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:02:27.084534  362780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 10:02:27.084726  362780 out.go:368] Setting JSON to false
	I1101 10:02:27.084754  362780 mustload.go:66] Loading cluster: ha-967493
	I1101 10:02:27.084887  362780 notify.go:221] Checking for updates...
	I1101 10:02:27.085238  362780 config.go:182] Loaded profile config "ha-967493": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:02:27.085258  362780 status.go:174] checking status of ha-967493 ...
	I1101 10:02:27.087748  362780 status.go:371] ha-967493 host status = "Stopped" (err=<nil>)
	I1101 10:02:27.087768  362780 status.go:384] host is not running, skipping remaining checks
	I1101 10:02:27.087776  362780 status.go:176] ha-967493 status: &{Name:ha-967493 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:02:27.087794  362780 status.go:174] checking status of ha-967493-m02 ...
	I1101 10:02:27.089258  362780 status.go:371] ha-967493-m02 host status = "Stopped" (err=<nil>)
	I1101 10:02:27.089271  362780 status.go:384] host is not running, skipping remaining checks
	I1101 10:02:27.089276  362780 status.go:176] ha-967493-m02 status: &{Name:ha-967493-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:02:27.089302  362780 status.go:174] checking status of ha-967493-m04 ...
	I1101 10:02:27.090550  362780 status.go:371] ha-967493-m04 host status = "Stopped" (err=<nil>)
	I1101 10:02:27.090579  362780 status.go:384] host is not running, skipping remaining checks
	I1101 10:02:27.090583  362780 status.go:176] ha-967493-m04 status: &{Name:ha-967493-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (229.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (107.55s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1101 10:03:10.157051  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-967493 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m46.875617241s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (107.55s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (93.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 node add --control-plane --alsologtostderr -v 5
E1101 10:04:30.855523  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:04:33.225100  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-967493 node add --control-plane --alsologtostderr -v 5: (1m32.755339152s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-967493 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (93.53s)
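
A minimal sketch of growing the control plane back after the earlier node delete, using the commands shown above:

    minikube -p ha-967493 node add --control-plane --alsologtostderr -v 5
    minikube -p ha-967493 status --alsologtostderr -v 5   # the new control-plane node should be listed
    minikube profile list --output json                   # data consumed by the HAppy*/Degraded* checks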

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.73s)

                                                
                                    
TestJSONOutput/start/Command (89.01s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-696578 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-696578 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m29.011396748s)
--- PASS: TestJSONOutput/start/Command (89.01s)
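
The machine-readable start output validated above is a stream of JSON events on stdout. A minimal sketch of consuming it (profile name illustrative; jq is an external tool, not part of the test):

    minikube start -p json-demo --output=json --user=testUser --memory=3072 --wait=true \
      --driver=kvm2 --container-runtime=crio | jq -c .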

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-696578 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-696578 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.15s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-696578 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-696578 --output=json --user=testUser: (7.149500097s)
--- PASS: TestJSONOutput/stop/Command (7.15s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-840294 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-840294 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (79.961566ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"03714f3b-0feb-44e7-806d-8b333e71d223","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-840294] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"45d7264b-9b5b-4f27-b793-27634832e4d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21832"}}
	{"specversion":"1.0","id":"866c5a4e-2d37-4710-8798-dbdea9611a17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1f0c57bb-1495-41bd-9ee4-f085b261be98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig"}}
	{"specversion":"1.0","id":"6927ffe7-a618-4ea0-87e3-90abf8880276","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube"}}
	{"specversion":"1.0","id":"a0fc083f-43a9-4c16-b958-e42fb643cdf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"363d76cf-af7b-4bda-bb82-2beee4a47ddf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"218c543b-64ba-40d1-9ab0-30857a329e44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-840294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-840294
--- PASS: TestErrorJSONOutput (0.24s)
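Note: the stdout block above is the raw CloudEvents-style output for the failure path: several `io.k8s.sigs.minikube.info` lines followed by one `io.k8s.sigs.minikube.error` event carrying the exit code (56) and reason (DRV_UNSUPPORTED_OS). A small, hedged Go sketch of extracting those fields from such a line, using only the keys visible above:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // One JSON line from the stdout block above, reduced to the error event.
        line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

        var ev struct {
            Type string            `json:"type"`
            Data map[string]string `json:"data"`
        }
        if err := json.Unmarshal([]byte(line), &ev); err != nil {
            panic(err)
        }
        if ev.Type == "io.k8s.sigs.minikube.error" {
            fmt.Printf("minikube failed: %s (exit code %s, reason %s)\n",
                ev.Data["message"], ev.Data["exitcode"], ev.Data["name"])
        }
    }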

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (85.27s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-775715 --driver=kvm2  --container-runtime=crio
E1101 10:07:33.932513  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:08:10.157021  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-775715 --driver=kvm2  --container-runtime=crio: (39.054754073s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-777790 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-777790 --driver=kvm2  --container-runtime=crio: (43.531123035s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-775715
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-777790
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-777790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-777790
helpers_test.go:175: Cleaning up "first-775715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-775715
--- PASS: TestMinikubeProfile (85.27s)
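Note: the profile steps above only run `minikube profile list --output json` and switch between the two profiles. A minimal sketch of checking that output programmatically; it deliberately treats the JSON as opaque (no schema is shown in this report) and just confirms both profile names appear:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command the test runs; assumes a minikube binary on PATH.
        out, err := exec.Command("minikube", "profile", "list", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        // Sanity-check that the output is a JSON object, without committing to a schema.
        var anyJSON map[string]json.RawMessage
        if err := json.Unmarshal(out, &anyJSON); err != nil {
            panic(fmt.Errorf("profile list did not return a JSON object: %w", err))
        }
        for _, name := range []string{"first-775715", "second-777790"} {
            if !bytes.Contains(out, []byte(name)) {
                fmt.Printf("profile %s missing from list output\n", name)
            }
        }
        fmt.Println("both profiles appear in `minikube profile list --output json`")
    }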

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (23.7s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-728582 --memory=3072 --mount-string /tmp/TestMountStartserial1986786595/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-728582 --memory=3072 --mount-string /tmp/TestMountStartserial1986786595/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.698019694s)
--- PASS: TestMountStart/serial/StartWithMountFirst (23.70s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-728582 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-728582 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.32s)
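Note: the mount verification shells into the VM and runs `findmnt --json /minikube-host`. For reference, a sketch that decodes that findmnt output; the struct below covers only findmnt's standard JSON keys (filesystems/target/source/fstype/options), and the `minikube ssh` wrapper used by the test is replaced by reading the JSON from stdin, which is a simplification rather than the test's code:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // findmnt --json prints an object with a "filesystems" array; each entry
    // carries the mount target, source, fstype and options.
    type findmntOutput struct {
        Filesystems []struct {
            Target  string `json:"target"`
            Source  string `json:"source"`
            Fstype  string `json:"fstype"`
            Options string `json:"options"`
        } `json:"filesystems"`
    }

    func main() {
        // e.g. minikube -p mount-start-1-728582 ssh -- findmnt --json /minikube-host | go run check_mount.go
        var out findmntOutput
        if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil {
            panic(err)
        }
        for _, fs := range out.Filesystems {
            if fs.Target == "/minikube-host" {
                fmt.Printf("mounted: %s on %s (%s, %s)\n", fs.Source, fs.Target, fs.Fstype, fs.Options)
                return
            }
        }
        fmt.Fprintln(os.Stderr, "/minikube-host is not mounted")
        os.Exit(1)
    }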

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (23.06s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-745873 --memory=3072 --mount-string /tmp/TestMountStartserial1986786595/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1101 10:09:30.856160  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-745873 --memory=3072 --mount-string /tmp/TestMountStartserial1986786595/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.055589285s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.06s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745873 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745873 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-728582 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745873 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745873 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.32s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-745873
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-745873: (1.381563396s)
--- PASS: TestMountStart/serial/Stop (1.38s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.01s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-745873
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-745873: (20.005250233s)
--- PASS: TestMountStart/serial/RestartStopped (21.01s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745873 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-745873 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (104.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-629778 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-629778 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m44.501326029s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.85s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-629778 -- rollout status deployment/busybox: (2.731079421s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- exec busybox-7b57f96db7-99t6f -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- exec busybox-7b57f96db7-bp2gl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- exec busybox-7b57f96db7-99t6f -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- exec busybox-7b57f96db7-bp2gl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- exec busybox-7b57f96db7-99t6f -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- exec busybox-7b57f96db7-bp2gl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.46s)
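Note: the deployment check above boils down to listing the busybox pod names with a jsonpath query and then running `nslookup` inside each pod for three increasingly qualified names. A compressed Go sketch of the same loop, reusing only the kubectl invocations that appear in the log (context name and jsonpath included); it is an illustration, not the helper the test uses:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // kubectl mirrors the test's `kubectl --context multinode-629778 ...` calls.
    func kubectl(args ...string) (string, error) {
        out, err := exec.Command("kubectl",
            append([]string{"--context", "multinode-629778"}, args...)...).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Same jsonpath the test uses to list the busybox pod names.
        names, err := kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
        if err != nil {
            panic(err)
        }
        for _, pod := range strings.Fields(names) {
            for _, host := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
                if out, err := kubectl("exec", pod, "--", "nslookup", host); err != nil {
                    fmt.Printf("DNS lookup of %s from %s failed: %v\n%s", host, pod, err, out)
                }
            }
        }
    }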

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- exec busybox-7b57f96db7-99t6f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- exec busybox-7b57f96db7-99t6f -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- exec busybox-7b57f96db7-bp2gl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-629778 -- exec busybox-7b57f96db7-bp2gl -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (42.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-629778 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-629778 -v=5 --alsologtostderr: (41.742678344s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.22s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-629778 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.48s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp testdata/cp-test.txt multinode-629778:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp multinode-629778:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1764469644/001/cp-test_multinode-629778.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp multinode-629778:/home/docker/cp-test.txt multinode-629778-m02:/home/docker/cp-test_multinode-629778_multinode-629778-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m02 "sudo cat /home/docker/cp-test_multinode-629778_multinode-629778-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp multinode-629778:/home/docker/cp-test.txt multinode-629778-m03:/home/docker/cp-test_multinode-629778_multinode-629778-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m03 "sudo cat /home/docker/cp-test_multinode-629778_multinode-629778-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp testdata/cp-test.txt multinode-629778-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp multinode-629778-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1764469644/001/cp-test_multinode-629778-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp multinode-629778-m02:/home/docker/cp-test.txt multinode-629778:/home/docker/cp-test_multinode-629778-m02_multinode-629778.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778 "sudo cat /home/docker/cp-test_multinode-629778-m02_multinode-629778.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp multinode-629778-m02:/home/docker/cp-test.txt multinode-629778-m03:/home/docker/cp-test_multinode-629778-m02_multinode-629778-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m03 "sudo cat /home/docker/cp-test_multinode-629778-m02_multinode-629778-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp testdata/cp-test.txt multinode-629778-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp multinode-629778-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1764469644/001/cp-test_multinode-629778-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp multinode-629778-m03:/home/docker/cp-test.txt multinode-629778:/home/docker/cp-test_multinode-629778-m03_multinode-629778.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778 "sudo cat /home/docker/cp-test_multinode-629778-m03_multinode-629778.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 cp multinode-629778-m03:/home/docker/cp-test.txt multinode-629778-m02:/home/docker/cp-test_multinode-629778-m03_multinode-629778-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 ssh -n multinode-629778-m02 "sudo cat /home/docker/cp-test_multinode-629778-m03_multinode-629778-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.33s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-629778 node stop m03: (1.828829254s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-629778 status: exit status 7 (359.038722ms)

                                                
                                                
-- stdout --
	multinode-629778
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-629778-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-629778-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-629778 status --alsologtostderr: exit status 7 (355.207681ms)

                                                
                                                
-- stdout --
	multinode-629778
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-629778-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-629778-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:12:50.107383  368521 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:12:50.107636  368521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:12:50.107645  368521 out.go:374] Setting ErrFile to fd 2...
	I1101 10:12:50.107649  368521 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:12:50.107876  368521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 10:12:50.108061  368521 out.go:368] Setting JSON to false
	I1101 10:12:50.108088  368521 mustload.go:66] Loading cluster: multinode-629778
	I1101 10:12:50.108242  368521 notify.go:221] Checking for updates...
	I1101 10:12:50.108438  368521 config.go:182] Loaded profile config "multinode-629778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:12:50.108451  368521 status.go:174] checking status of multinode-629778 ...
	I1101 10:12:50.110682  368521 status.go:371] multinode-629778 host status = "Running" (err=<nil>)
	I1101 10:12:50.110702  368521 host.go:66] Checking if "multinode-629778" exists ...
	I1101 10:12:50.113301  368521 main.go:143] libmachine: domain multinode-629778 has defined MAC address 52:54:00:69:2d:9f in network mk-multinode-629778
	I1101 10:12:50.113757  368521 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:69:2d:9f", ip: ""} in network mk-multinode-629778: {Iface:virbr1 ExpiryTime:2025-11-01 11:10:25 +0000 UTC Type:0 Mac:52:54:00:69:2d:9f Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:multinode-629778 Clientid:01:52:54:00:69:2d:9f}
	I1101 10:12:50.113786  368521 main.go:143] libmachine: domain multinode-629778 has defined IP address 192.168.39.7 and MAC address 52:54:00:69:2d:9f in network mk-multinode-629778
	I1101 10:12:50.113949  368521 host.go:66] Checking if "multinode-629778" exists ...
	I1101 10:12:50.114159  368521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:12:50.116207  368521 main.go:143] libmachine: domain multinode-629778 has defined MAC address 52:54:00:69:2d:9f in network mk-multinode-629778
	I1101 10:12:50.116563  368521 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:69:2d:9f", ip: ""} in network mk-multinode-629778: {Iface:virbr1 ExpiryTime:2025-11-01 11:10:25 +0000 UTC Type:0 Mac:52:54:00:69:2d:9f Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:multinode-629778 Clientid:01:52:54:00:69:2d:9f}
	I1101 10:12:50.116590  368521 main.go:143] libmachine: domain multinode-629778 has defined IP address 192.168.39.7 and MAC address 52:54:00:69:2d:9f in network mk-multinode-629778
	I1101 10:12:50.116713  368521 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/multinode-629778/id_rsa Username:docker}
	I1101 10:12:50.202053  368521 ssh_runner.go:195] Run: systemctl --version
	I1101 10:12:50.209040  368521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:12:50.227525  368521 kubeconfig.go:125] found "multinode-629778" server: "https://192.168.39.7:8443"
	I1101 10:12:50.227576  368521 api_server.go:166] Checking apiserver status ...
	I1101 10:12:50.227617  368521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 10:12:50.251020  368521 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup
	W1101 10:12:50.266084  368521 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1420/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 10:12:50.266157  368521 ssh_runner.go:195] Run: ls
	I1101 10:12:50.271968  368521 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I1101 10:12:50.277306  368521 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I1101 10:12:50.277333  368521 status.go:463] multinode-629778 apiserver status = Running (err=<nil>)
	I1101 10:12:50.277344  368521 status.go:176] multinode-629778 status: &{Name:multinode-629778 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:12:50.277362  368521 status.go:174] checking status of multinode-629778-m02 ...
	I1101 10:12:50.278884  368521 status.go:371] multinode-629778-m02 host status = "Running" (err=<nil>)
	I1101 10:12:50.278910  368521 host.go:66] Checking if "multinode-629778-m02" exists ...
	I1101 10:12:50.281276  368521 main.go:143] libmachine: domain multinode-629778-m02 has defined MAC address 52:54:00:c5:31:30 in network mk-multinode-629778
	I1101 10:12:50.281711  368521 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c5:31:30", ip: ""} in network mk-multinode-629778: {Iface:virbr1 ExpiryTime:2025-11-01 11:11:25 +0000 UTC Type:0 Mac:52:54:00:c5:31:30 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-629778-m02 Clientid:01:52:54:00:c5:31:30}
	I1101 10:12:50.281741  368521 main.go:143] libmachine: domain multinode-629778-m02 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:31:30 in network mk-multinode-629778
	I1101 10:12:50.281892  368521 host.go:66] Checking if "multinode-629778-m02" exists ...
	I1101 10:12:50.282097  368521 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 10:12:50.284610  368521 main.go:143] libmachine: domain multinode-629778-m02 has defined MAC address 52:54:00:c5:31:30 in network mk-multinode-629778
	I1101 10:12:50.285058  368521 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:c5:31:30", ip: ""} in network mk-multinode-629778: {Iface:virbr1 ExpiryTime:2025-11-01 11:11:25 +0000 UTC Type:0 Mac:52:54:00:c5:31:30 Iaid: IPaddr:192.168.39.212 Prefix:24 Hostname:multinode-629778-m02 Clientid:01:52:54:00:c5:31:30}
	I1101 10:12:50.285095  368521 main.go:143] libmachine: domain multinode-629778-m02 has defined IP address 192.168.39.212 and MAC address 52:54:00:c5:31:30 in network mk-multinode-629778
	I1101 10:12:50.285239  368521 sshutil.go:53] new ssh client: &{IP:192.168.39.212 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21832-344560/.minikube/machines/multinode-629778-m02/id_rsa Username:docker}
	I1101 10:12:50.375112  368521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 10:12:50.396923  368521 status.go:176] multinode-629778-m02 status: &{Name:multinode-629778-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:12:50.396970  368521 status.go:174] checking status of multinode-629778-m03 ...
	I1101 10:12:50.398849  368521 status.go:371] multinode-629778-m03 host status = "Stopped" (err=<nil>)
	I1101 10:12:50.398884  368521 status.go:384] host is not running, skipping remaining checks
	I1101 10:12:50.398890  368521 status.go:176] multinode-629778-m03 status: &{Name:multinode-629778-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.54s)
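Note: the --alsologtostderr trace above shows what `minikube status` checks per node: host state, `systemctl is-active` for the kubelet, and an HTTPS GET of `/healthz` on the apiserver (which returned `200 ok` here); exit status 7 simply reports that at least one host is stopped. A hedged sketch of just the healthz probe, with the node IP copied from the trace and certificate verification skipped as a simplification:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Node IP taken from the status trace above; the apiserver serves a
        // cluster-internal certificate, so this plain probe skips verification.
        url := "https://192.168.39.7:8443/healthz"
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            fmt.Println("apiserver: Stopped or unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("apiserver: %d %s\n", resp.StatusCode, string(body)) // expect 200 ok
    }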

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (46.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 node start m03 -v=5 --alsologtostderr
E1101 10:13:10.161035  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-629778 node start m03 -v=5 --alsologtostderr: (45.911216902s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (46.44s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (306.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-629778
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-629778
E1101 10:14:30.856369  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-629778: (2m53.873245893s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-629778 --wait=true -v=5 --alsologtostderr
E1101 10:18:10.156344  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-629778 --wait=true -v=5 --alsologtostderr: (2m12.400965976s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-629778
--- PASS: TestMultiNode/serial/RestartKeepsNodes (306.41s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-629778 node delete m03: (2.205922209s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.70s)
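Note: the final readiness assertion uses a kubectl go-template that prints the `Ready` condition of every remaining node. For readers who find the template hard to follow, an equivalent sketch that asks for `-o json` instead and walks `.items[].status.conditions` (standard Node object fields); it is a rewording of the check, not the test's own code:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // nodeList covers only the Node fields the readiness check needs.
    type nodeList struct {
        Items []struct {
            Metadata struct {
                Name string `json:"name"`
            } `json:"metadata"`
            Status struct {
                Conditions []struct {
                    Type   string `json:"type"`
                    Status string `json:"status"`
                } `json:"conditions"`
            } `json:"status"`
        } `json:"items"`
    }

    func main() {
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
        if err != nil {
            panic(err)
        }
        var nodes nodeList
        if err := json.Unmarshal(out, &nodes); err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                if c.Type == "Ready" {
                    fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status) // expect "True"
                }
            }
        }
    }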

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (159.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 stop
E1101 10:19:30.855530  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:21:13.228951  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-629778 stop: (2m38.948181222s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-629778 status: exit status 7 (69.694893ms)

                                                
                                                
-- stdout --
	multinode-629778
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-629778-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-629778 status --alsologtostderr: exit status 7 (67.081089ms)

                                                
                                                
-- stdout --
	multinode-629778
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-629778-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:21:25.026219  371339 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:21:25.026479  371339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:25.026488  371339 out.go:374] Setting ErrFile to fd 2...
	I1101 10:21:25.026492  371339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:21:25.026718  371339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 10:21:25.026912  371339 out.go:368] Setting JSON to false
	I1101 10:21:25.026938  371339 mustload.go:66] Loading cluster: multinode-629778
	I1101 10:21:25.027051  371339 notify.go:221] Checking for updates...
	I1101 10:21:25.027290  371339 config.go:182] Loaded profile config "multinode-629778": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:21:25.027303  371339 status.go:174] checking status of multinode-629778 ...
	I1101 10:21:25.029467  371339 status.go:371] multinode-629778 host status = "Stopped" (err=<nil>)
	I1101 10:21:25.029487  371339 status.go:384] host is not running, skipping remaining checks
	I1101 10:21:25.029495  371339 status.go:176] multinode-629778 status: &{Name:multinode-629778 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 10:21:25.029521  371339 status.go:174] checking status of multinode-629778-m02 ...
	I1101 10:21:25.031074  371339 status.go:371] multinode-629778-m02 host status = "Stopped" (err=<nil>)
	I1101 10:21:25.031091  371339 status.go:384] host is not running, skipping remaining checks
	I1101 10:21:25.031097  371339 status.go:176] multinode-629778-m02 status: &{Name:multinode-629778-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (159.09s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (119.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-629778 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1101 10:23:10.157028  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-629778 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.966537573s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-629778 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (119.47s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-629778
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-629778-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-629778-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (85.150681ms)

                                                
                                                
-- stdout --
	* [multinode-629778-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-629778-m02' is duplicated with machine name 'multinode-629778-m02' in profile 'multinode-629778'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-629778-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-629778-m03 --driver=kvm2  --container-runtime=crio: (43.620328267s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-629778
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-629778: exit status 80 (222.48682ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-629778 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-629778-m03 already exists in multinode-629778-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-629778-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.85s)

                                                
                                    
x
+
TestScheduledStopUnix (114.23s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-072289 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-072289 --memory=3072 --driver=kvm2  --container-runtime=crio: (42.444031062s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-072289 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-072289 -n scheduled-stop-072289
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-072289 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 10:27:02.417478  348518 retry.go:31] will retry after 92.211µs: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.418666  348518 retry.go:31] will retry after 99.735µs: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.419836  348518 retry.go:31] will retry after 120.92µs: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.420957  348518 retry.go:31] will retry after 416.211µs: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.422125  348518 retry.go:31] will retry after 521.746µs: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.423287  348518 retry.go:31] will retry after 884.934µs: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.424476  348518 retry.go:31] will retry after 1.559273ms: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.426735  348518 retry.go:31] will retry after 1.030413ms: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.427913  348518 retry.go:31] will retry after 2.130069ms: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.431147  348518 retry.go:31] will retry after 2.678585ms: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.434462  348518 retry.go:31] will retry after 4.846055ms: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.439744  348518 retry.go:31] will retry after 12.852787ms: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.453075  348518 retry.go:31] will retry after 16.679857ms: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.470472  348518 retry.go:31] will retry after 26.628362ms: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.497761  348518 retry.go:31] will retry after 22.902112ms: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
I1101 10:27:02.521368  348518 retry.go:31] will retry after 54.211628ms: open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/scheduled-stop-072289/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-072289 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-072289 -n scheduled-stop-072289
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-072289
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-072289 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1101 10:28:10.164752  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-072289
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-072289: exit status 7 (65.467551ms)

                                                
                                                
-- stdout --
	scheduled-stop-072289
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-072289 -n scheduled-stop-072289
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-072289 -n scheduled-stop-072289: exit status 7 (65.649761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-072289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-072289
--- PASS: TestScheduledStopUnix (114.23s)
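Note: the burst of `retry.go:31` lines above is minikube's retry helper polling for the scheduled-stop pid file with steadily growing delays (microseconds up to tens of milliseconds) until the schedule is written. A generic sketch of that wait-with-backoff pattern; the pid path below is illustrative (copied in shape from the log), and the delays are a guess rather than minikube's actual schedule:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls path with an exponentially growing (jitter-free) delay,
    // mirroring the "will retry after ..." lines in the log above.
    func waitForFile(path string, deadline time.Duration) ([]byte, error) {
        delay := 100 * time.Microsecond
        start := time.Now()
        for {
            if data, err := os.ReadFile(path); err == nil {
                return data, nil
            } else if time.Since(start) > deadline {
                return nil, err
            }
            time.Sleep(delay)
            if delay < 100*time.Millisecond {
                delay *= 2
            }
        }
    }

    func main() {
        // Hypothetical pid path modelled on the profile layout shown in the log.
        pid, err := waitForFile(os.ExpandEnv("$HOME/.minikube/profiles/scheduled-stop-072289/pid"), 5*time.Second)
        if err != nil {
            fmt.Println("scheduled stop pid file never appeared:", err)
            return
        }
        fmt.Printf("scheduled stop is being handled by pid %s\n", pid)
    }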

                                                
                                    
x
+
TestRunningBinaryUpgrade (114.11s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2436737302 start -p running-upgrade-142569 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2436737302 start -p running-upgrade-142569 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m8.887356819s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-142569 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-142569 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.822227175s)
helpers_test.go:175: Cleaning up "running-upgrade-142569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-142569
--- PASS: TestRunningBinaryUpgrade (114.11s)

TestKubernetesUpgrade (200.59s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-246487 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-246487 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.77688543s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-246487
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-246487: (2.150898453s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-246487 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-246487 status --format={{.Host}}: exit status 7 (78.420582ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-246487 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1101 10:29:30.855878  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-246487 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.809625093s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-246487 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-246487 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-246487 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (103.753613ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-246487] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-246487
	    minikube start -p kubernetes-upgrade-246487 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2464872 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-246487 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
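As the message above notes, the rejected downgrade leaves the existing v1.34.1 cluster untouched; if v1.28.0 were genuinely needed, the suggested recovery is a delete-and-recreate of the same profile:
    minikube delete -p kubernetes-upgrade-246487
    minikube start -p kubernetes-upgrade-246487 --kubernetes-version=v1.28.0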
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-246487 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-246487 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m15.5734215s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-246487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-246487
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-246487: (1.030547867s)
--- PASS: TestKubernetesUpgrade (200.59s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-146388 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-146388 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (100.814829ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-146388] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
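For reference, the fix for this usage error is the one the message suggests: drop the version pin (globally, if one is set) and start without Kubernetes; the driver and runtime flags below simply mirror this job's configuration.
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-146388 --no-kubernetes --driver=kvm2 --container-runtime=crio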
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestStoppedBinaryUpgrade/Setup (0.55s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestNoKubernetes/serial/StartWithK8s (88.97s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-146388 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-146388 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m28.629456909s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-146388 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (88.97s)

TestStoppedBinaryUpgrade/Upgrade (168.02s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3318850160 start -p stopped-upgrade-328796 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3318850160 start -p stopped-upgrade-328796 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m49.81133958s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3318850160 -p stopped-upgrade-328796 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3318850160 -p stopped-upgrade-328796 stop: (1.713723168s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-328796 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-328796 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (56.498436298s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (168.02s)

TestNoKubernetes/serial/StartWithStopK8s (30.04s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-146388 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-146388 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (28.839786159s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-146388 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-146388 status -o json: exit status 2 (238.301004ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-146388","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-146388
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.04s)

TestNoKubernetes/serial/Start (53.03s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-146388 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-146388 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.029514205s)
--- PASS: TestNoKubernetes/serial/Start (53.03s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-328796
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-328796: (1.392273388s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

TestPause/serial/Start (97.96s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-876158 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-876158 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m37.9613511s)
--- PASS: TestPause/serial/Start (97.96s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-146388 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-146388 "sudo systemctl is-active --quiet service kubelet": exit status 1 (168.372867ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
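The non-zero exit is the outcome the test wants: with Kubernetes disabled, kubelet is not an active systemd unit. A manual spot-check (without --quiet, so the state is printed) looks roughly like this:
    minikube ssh -p NoKubernetes-146388 "sudo systemctl is-active kubelet"   # non-zero exit: kubelet is not running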
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.17s)

TestNoKubernetes/serial/ProfileList (1.05s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.05s)

TestNoKubernetes/serial/Stop (1.35s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-146388
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-146388: (1.34702319s)
--- PASS: TestNoKubernetes/serial/Stop (1.35s)

TestNoKubernetes/serial/StartNoArgs (53.96s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-146388 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-146388 --driver=kvm2  --container-runtime=crio: (53.964543475s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (53.96s)

TestNetworkPlugins/group/false (5.43s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-543676 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-543676 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (972.866896ms)

                                                
                                                
-- stdout --
	* [false-543676] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 10:31:40.938997  377717 out.go:360] Setting OutFile to fd 1 ...
	I1101 10:31:40.939243  377717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:31:40.939253  377717 out.go:374] Setting ErrFile to fd 2...
	I1101 10:31:40.939257  377717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 10:31:40.939473  377717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21832-344560/.minikube/bin
	I1101 10:31:40.939962  377717 out.go:368] Setting JSON to false
	I1101 10:31:40.940946  377717 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8049,"bootTime":1761985052,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 10:31:40.941050  377717 start.go:143] virtualization: kvm guest
	I1101 10:31:40.943362  377717 out.go:179] * [false-543676] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 10:31:40.945203  377717 notify.go:221] Checking for updates...
	I1101 10:31:40.945217  377717 out.go:179]   - MINIKUBE_LOCATION=21832
	I1101 10:31:40.946830  377717 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 10:31:40.948353  377717 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21832-344560/kubeconfig
	I1101 10:31:40.949794  377717 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21832-344560/.minikube
	I1101 10:31:40.951207  377717 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 10:31:40.952444  377717 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 10:31:40.954289  377717 config.go:182] Loaded profile config "NoKubernetes-146388": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 10:31:40.954441  377717 config.go:182] Loaded profile config "force-systemd-env-112765": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:31:40.954573  377717 config.go:182] Loaded profile config "pause-876158": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 10:31:40.954698  377717 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 10:31:41.839253  377717 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 10:31:41.840571  377717 start.go:309] selected driver: kvm2
	I1101 10:31:41.840595  377717 start.go:930] validating driver "kvm2" against <nil>
	I1101 10:31:41.840609  377717 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 10:31:41.842534  377717 out.go:203] 
	W1101 10:31:41.844064  377717 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 10:31:41.845330  377717 out.go:203] 

                                                
                                                
** /stderr **
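This rejection is by design: the crio runtime needs a CNI, so --cni=false cannot be combined with --container-runtime=crio. A working variant of the same start command keeps crio and selects an explicit CNI instead; bridge is shown here only as one example of a supported value.
    minikube start -p false-543676 --memory=3072 --cni=bridge --driver=kvm2  --container-runtime=crio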
net_test.go:88: 
----------------------- debugLogs start: false-543676 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-543676
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-543676
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-543676
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-543676
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-543676
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-543676
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-543676
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-543676
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-543676
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-543676
>>> host: /etc/nsswitch.conf:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: /etc/hosts:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: /etc/resolv.conf:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-543676
>>> host: crictl pods:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: crictl containers:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> k8s: describe netcat deployment:
error: context "false-543676" does not exist
>>> k8s: describe netcat pod(s):
error: context "false-543676" does not exist
>>> k8s: netcat logs:
error: context "false-543676" does not exist
>>> k8s: describe coredns deployment:
error: context "false-543676" does not exist
>>> k8s: describe coredns pods:
error: context "false-543676" does not exist
>>> k8s: coredns logs:
error: context "false-543676" does not exist
>>> k8s: describe api server pod(s):
error: context "false-543676" does not exist
>>> k8s: api server logs:
error: context "false-543676" does not exist
>>> host: /etc/cni:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: ip a s:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: ip r s:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: iptables-save:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: iptables table nat:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> k8s: describe kube-proxy daemon set:
error: context "false-543676" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "false-543676" does not exist
>>> k8s: kube-proxy logs:
error: context "false-543676" does not exist
>>> host: kubelet daemon status:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: kubelet daemon config:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> k8s: kubelet logs:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-543676
>>> host: docker daemon status:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: docker daemon config:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: /etc/docker/daemon.json:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: docker system info:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: cri-docker daemon status:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: cri-docker daemon config:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: cri-dockerd version:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: containerd daemon status:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: containerd daemon config:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: /etc/containerd/config.toml:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: containerd config dump:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: crio daemon status:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: crio daemon config:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: /etc/crio:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
>>> host: crio config:
* Profile "false-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-543676"
----------------------- debugLogs end: false-543676 [took: 4.259044671s] --------------------------------
helpers_test.go:175: Cleaning up "false-543676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-543676
--- PASS: TestNetworkPlugins/group/false (5.43s)

TestISOImage/Setup (53.43s)
=== RUN   TestISOImage/Setup
iso_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p guest-651909 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p guest-651909 --no-kubernetes --driver=kvm2  --container-runtime=crio: (53.425426229s)
--- PASS: TestISOImage/Setup (53.43s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-146388 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-146388 "sudo systemctl is-active --quiet service kubelet": exit status 1 (187.348274ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

TestISOImage/Binaries/crictl (0.22s)
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.22s)

TestISOImage/Binaries/curl (0.18s)
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.18s)

TestISOImage/Binaries/docker (0.18s)
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.18s)

TestISOImage/Binaries/git (0.19s)
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.19s)

TestISOImage/Binaries/iptables (0.19s)
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.19s)

TestISOImage/Binaries/podman (0.18s)
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.18s)

TestISOImage/Binaries/rsync (0.18s)
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.18s)

TestISOImage/Binaries/socat (0.18s)
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.18s)

TestISOImage/Binaries/wget (0.2s)
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.20s)

TestISOImage/Binaries/VBoxControl (0.18s)
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.18s)

TestISOImage/Binaries/VBoxService (0.18s)
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (95.9s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-152855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-152855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m35.903219738s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (95.90s)

TestStartStop/group/no-preload/serial/FirstStart (117.39s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-122065 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-122065 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m57.39056752s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (117.39s)

TestStartStop/group/embed-certs/serial/FirstStart (107.09s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-766429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:34:30.855984  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-766429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m47.092250276s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (107.09s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.35s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-152855 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d7306d02-40a4-4bab-b59b-82939954e742] Pending
helpers_test.go:352: "busybox" [d7306d02-40a4-4bab-b59b-82939954e742] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d7306d02-40a4-4bab-b59b-82939954e742] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004723844s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-152855 exec busybox -- /bin/sh -c "ulimit -n"
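For reference, the deploy-and-verify sequence above can be reproduced by hand against the same context; kubectl wait is used here as a stand-in for the harness's own pod polling, with the same 8-minute budget.
    kubectl --context old-k8s-version-152855 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-152855 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    kubectl --context old-k8s-version-152855 exec busybox -- /bin/sh -c "ulimit -n"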
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-152855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-152855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.094885699s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-152855 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.18s)

TestStartStop/group/old-k8s-version/serial/Stop (87.95s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-152855 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-152855 --alsologtostderr -v=3: (1m27.95122149s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (87.95s)

TestStartStop/group/no-preload/serial/DeployApp (9.32s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-122065 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [7b7eaeb8-bdad-40dd-8cbb-ac29e923483d] Pending
helpers_test.go:352: "busybox" [7b7eaeb8-bdad-40dd-8cbb-ac29e923483d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [7b7eaeb8-bdad-40dd-8cbb-ac29e923483d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.006092235s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-122065 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

TestStartStop/group/embed-certs/serial/DeployApp (8.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-766429 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bacab70f-597e-4104-887b-d595a16b054a] Pending
helpers_test.go:352: "busybox" [bacab70f-597e-4104-887b-d595a16b054a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bacab70f-597e-4104-887b-d595a16b054a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004899801s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-766429 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-122065 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-122065 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040419968s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-122065 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-766429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-766429 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.082475032s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-766429 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (83.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-122065 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-122065 --alsologtostderr -v=3: (1m23.287918238s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (83.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (85.8s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-766429 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-766429 --alsologtostderr -v=3: (1m25.799070299s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (85.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-586066 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-586066 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m30.312431392s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.31s)
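
--apiserver-port=8444 moves the API endpoint off minikube's default 8443; the simplest after-the-fact check is to read the cluster's server URL back out of kubeconfig. A sketch, assuming (as minikube normally does) that the kubeconfig cluster entry is named after the profile:

    // check_apiserver_port.go - sketch: confirm the profile's kubeconfig entry
    // points at the non-default API server port requested with --apiserver-port.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        cluster := "default-k8s-diff-port-586066" // assumption: cluster name == profile name
        jp := fmt.Sprintf(`{.clusters[?(@.name=="%s")].cluster.server}`, cluster)
        out, err := exec.Command("kubectl", "config", "view", "-o", "jsonpath="+jp).CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl config view failed: %v\n%s", err, out)
        }
        server := strings.TrimSpace(string(out))
        fmt.Println("API server:", server)
        if !strings.HasSuffix(server, ":8444") {
            log.Fatalf("expected port 8444 in %q", server)
        }
    }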

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152855 -n old-k8s-version-152855
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152855 -n old-k8s-version-152855: exit status 7 (72.874903ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-152855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.16s)
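
The "status error: exit status 7 (may be ok)" line is intentional: `minikube status` reports state through its exit code, and after a stop the Host check returns 7 with "Stopped" on stdout, which the test accepts before enabling the dashboard addon. A sketch of that tolerant check (the meaning of 7 is inferred from this log, not from documentation):

    // status_after_stop.go - sketch: run `minikube status` and treat exit code 7
    // (host stopped, as seen in the log above) as acceptable instead of fatal.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        profile := "old-k8s-version-152855"
        cmd := exec.Command("out/minikube-linux-amd64", "status",
            "--format={{.Host}}", "-p", profile, "-n", profile)
        out, err := cmd.CombinedOutput()
        fmt.Printf("status output: %s", out)

        if err != nil {
            if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 7 {
                fmt.Println("exit status 7: host is stopped, which is fine here")
            } else {
                log.Fatalf("unexpected status error: %v", err)
            }
        }
    }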

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (121.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-152855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-152855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (2m1.030068896s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-152855 -n old-k8s-version-152855
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (121.30s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-122065 -n no-preload-122065
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-122065 -n no-preload-122065: exit status 7 (71.026876ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-122065 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (76.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-122065 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-122065 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m16.53759025s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-122065 -n no-preload-122065
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (76.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766429 -n embed-certs-766429
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766429 -n embed-certs-766429: exit status 7 (73.213969ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-766429 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (80.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-766429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:37:53.231357  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:38:10.156307  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-766429 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m20.008825158s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-766429 -n embed-certs-766429
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (80.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-586066 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [aaf9c260-4568-476c-8552-b3345e8110bd] Pending
helpers_test.go:352: "busybox" [aaf9c260-4568-476c-8552-b3345e8110bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [aaf9c260-4568-476c-8552-b3345e8110bd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.007432059s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-586066 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-586066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-586066 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.477687562s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-586066 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.61s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (89.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-586066 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-586066 --alsologtostderr -v=3: (1m29.173065297s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (89.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lbn8v" [6d585dae-34e0-4bba-99d0-402479fa0d21] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00539046s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
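
UserAppExistsAfterStop only needs to see a Running pod behind the k8s-app=kubernetes-dashboard selector within 9 minutes of the restart. A rough equivalent of that wait loop (the 5s poll interval and jsonpath are illustrative; the test uses its own helper in helpers_test.go):

    // wait_for_label.go - sketch: poll for a Running pod matching a label
    // selector, roughly what the UserAppExistsAfterStop check does.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        ctx := "no-preload-122065"
        selector := "k8s-app=kubernetes-dashboard"
        deadline := time.Now().Add(9 * time.Minute)

        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", ctx,
                "-n", "kubernetes-dashboard", "get", "pods", "-l", selector,
                "-o", "jsonpath={.items[*].status.phase}").CombinedOutput()
            if err == nil && strings.Contains(string(out), "Running") {
                fmt.Println("dashboard pod is Running")
                return
            }
            time.Sleep(5 * time.Second) // illustrative poll interval
        }
        log.Fatalf("no Running pod matched %q within 9m", selector)
    }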

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lbn8v" [6d585dae-34e0-4bba-99d0-402479fa0d21] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004196366s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-122065 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zhkh5" [d63d6e2e-9529-4a5c-a193-c72af3b5bd25] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zhkh5" [d63d6e2e-9529-4a5c-a193-c72af3b5bd25] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004053976s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (10.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-122065 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
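
VerifyKubernetesImages lists the node's images as JSON and reports anything outside the expected core set, which is how the busybox test image gets called out above. A loose sketch of that kind of scan; the "repoTags" field name and the registry allow-list are assumptions, since the JSON shape is not shown in this report:

    // scan_images.go - sketch: list images on the node as JSON and report any
    // tag that is not part of an assumed core set. Field names are guesses.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p", "no-preload-122065",
            "image", "list", "--format=json").Output()
        if err != nil {
            log.Fatalf("image list failed: %v", err)
        }

        // Decode loosely so the sketch does not depend on the exact schema.
        var images []map[string]interface{}
        if err := json.Unmarshal(out, &images); err != nil {
            log.Fatalf("unexpected JSON: %v", err)
        }

        for _, img := range images {
            tags, _ := img["repoTags"].([]interface{}) // assumed field name
            for _, t := range tags {
                tag := fmt.Sprint(t)
                if !strings.HasPrefix(tag, "registry.k8s.io/") &&
                    !strings.HasPrefix(tag, "gcr.io/k8s-minikube/storage-provisioner") {
                    fmt.Println("Found non-minikube image:", tag)
                }
            }
        }
    }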

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-122065 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-122065 -n no-preload-122065
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-122065 -n no-preload-122065: exit status 2 (244.377127ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-122065 -n no-preload-122065
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-122065 -n no-preload-122065: exit status 2 (235.869853ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-122065 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-122065 -n no-preload-122065
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-122065 -n no-preload-122065
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.83s)
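
The Pause subtest drives pause, status, unpause, status; while paused, `status` exits 2 with the API server shown as Paused and the kubelet as Stopped, and the test treats that exit code as expected (the "(may be ok)" lines). A compact Go sketch of the same cycle:

    // pause_cycle.go - sketch of the pause/unpause verification above; exit
    // status 2 from `minikube status` is expected while the node is paused.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // status runs `minikube status` with the given Go template and returns its
    // output, tolerating exit code 2 (the paused case seen in the log).
    func status(profile, format string) string {
        out, err := exec.Command("out/minikube-linux-amd64", "status",
            "--format="+format, "-p", profile, "-n", profile).CombinedOutput()
        if err != nil {
            if ee, ok := err.(*exec.ExitError); !ok || ee.ExitCode() != 2 {
                log.Fatalf("status failed: %v\n%s", err, out)
            }
        }
        return string(out)
    }

    func main() {
        profile, mk := "no-preload-122065", "out/minikube-linux-amd64"

        if out, err := exec.Command(mk, "pause", "-p", profile).CombinedOutput(); err != nil {
            log.Fatalf("pause failed: %v\n%s", err, out)
        }
        fmt.Print("apiserver: ", status(profile, "{{.APIServer}}"))
        fmt.Print("kubelet:   ", status(profile, "{{.Kubelet}}"))

        if out, err := exec.Command(mk, "unpause", "-p", profile).CombinedOutput(); err != nil {
            log.Fatalf("unpause failed: %v\n%s", err, out)
        }
        fmt.Print("apiserver: ", status(profile, "{{.APIServer}}"))
        fmt.Print("kubelet:   ", status(profile, "{{.Kubelet}}"))
    }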

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.97s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-112033 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-112033 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (48.968926983s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zhkh5" [d63d6e2e-9529-4a5c-a193-c72af3b5bd25] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005880212s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-766429 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9wvbg" [b6726761-e99c-4f7c-a8a7-9c4fb4679dae] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00534976s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-766429 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-766429 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-766429 --alsologtostderr -v=1: (1.032433444s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-766429 -n embed-certs-766429
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-766429 -n embed-certs-766429: exit status 2 (235.518804ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-766429 -n embed-certs-766429
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-766429 -n embed-certs-766429: exit status 2 (236.536749ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-766429 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-766429 -n embed-certs-766429
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-766429 -n embed-certs-766429
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-9wvbg" [b6726761-e99c-4f7c-a8a7-9c4fb4679dae] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008041305s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-152855 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (92.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m32.866379206s)
--- PASS: TestNetworkPlugins/group/auto/Start (92.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-152855 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-152855 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-152855 -n old-k8s-version-152855
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-152855 -n old-k8s-version-152855: exit status 2 (219.893469ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-152855 -n old-k8s-version-152855
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-152855 -n old-k8s-version-152855: exit status 2 (236.880093ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-152855 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-152855 -n old-k8s-version-152855
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-152855 -n old-k8s-version-152855
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.79s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (89.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1101 10:39:30.855752  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m29.82534847s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (89.83s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-112033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-112033 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.079286227s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (7.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-112033 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-112033 --alsologtostderr -v=3: (7.688868488s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.69s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-112033 -n newest-cni-112033
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-112033 -n newest-cni-112033: exit status 7 (88.761248ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-112033 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (44.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-112033 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-112033 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (44.004354526s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-112033 -n newest-cni-112033
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (44.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-586066 -n default-k8s-diff-port-586066
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-586066 -n default-k8s-diff-port-586066: exit status 7 (95.436606ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-586066 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (62.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-586066 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 10:40:35.558178  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:35.564833  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:35.576324  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:35.597764  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:35.639172  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:35.721296  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:35.882653  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:36.204638  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:36.847620  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:38.129576  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:40.691426  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:45.813018  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:40:53.936425  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-586066 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m2.576435906s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-586066 -n default-k8s-diff-port-586066
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (62.94s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-112033 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-112033 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-112033 --alsologtostderr -v=1: (1.0538597s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-112033 -n newest-cni-112033
E1101 10:40:56.054887  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-112033 -n newest-cni-112033: exit status 2 (273.359211ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-112033 -n newest-cni-112033
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-112033 -n newest-cni-112033: exit status 2 (359.250134ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-112033 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-112033 --alsologtostderr -v=1: (1.408711088s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-112033 -n newest-cni-112033
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-112033 -n newest-cni-112033
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.06s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-543676 "pgrep -a kubelet"
I1101 10:40:55.499121  348518 config.go:182] Loaded profile config "auto-543676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)
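
KubeletFlags simply runs `pgrep -a kubelet` over `minikube ssh` and inspects the command line, for example to confirm the runtime endpoint used with crio. A sketch of that check; the exact flag grepped for here is an illustrative choice, not the test's assertion:

    // kubelet_flags.go - sketch: read the running kubelet command line over
    // `minikube ssh` and look for a CRI-O runtime endpoint flag (illustrative).
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "auto-543676",
            "pgrep -a kubelet").CombinedOutput()
        if err != nil {
            log.Fatalf("ssh failed: %v\n%s", err, out)
        }
        cmdline := string(out)
        fmt.Print(cmdline)
        if !strings.Contains(cmdline, "crio") {
            log.Printf("kubelet command line does not mention crio (assumed check)")
        }
    }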

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-543676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p5wpn" [40ef4c97-29b7-410e-84de-9e4d9667b02a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p5wpn" [40ef4c97-29b7-410e-84de-9e4d9667b02a] Running
E1101 10:41:05.205687  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:41:05.212097  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:41:05.223485  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:41:05.244943  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:41:05.286328  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:41:05.367799  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.005955706s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-4jck8" [769d4246-1ff1-4337-a287-d4aebab31810] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006310525s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-543676 "pgrep -a kubelet"
E1101 10:41:05.529549  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1101 10:41:05.736351  348518 config.go:182] Loaded profile config "kindnet-543676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-543676 replace --force -f testdata/netcat-deployment.yaml
E1101 10:41:05.850817  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-srmxd" [ac69baa1-80b4-4648-ba90-27ea802fefab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 10:41:06.492827  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:41:07.774561  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-srmxd" [ac69baa1-80b4-4648-ba90-27ea802fefab] Running
E1101 10:41:15.458776  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.019673153s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-543676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
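
The DNS, Localhost and HairPin subtests all exec into the netcat deployment: an in-cluster nslookup of kubernetes.default, a netcat to localhost:8080, and a netcat back to the pod's own service name (the hairpin case). A sketch running the three probes back to back, with the commands copied from the log:

    // netcat_probes.go - sketch: the three connectivity probes from the
    // auto/DNS, auto/Localhost and auto/HairPin subtests, run back to back.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func probe(ctx, shellCmd string) {
        out, err := exec.Command("kubectl", "--context", ctx, "exec", "deployment/netcat",
            "--", "/bin/sh", "-c", shellCmd).CombinedOutput()
        if err != nil {
            log.Fatalf("probe %q failed: %v\n%s", shellCmd, err, out)
        }
        fmt.Printf("probe %q ok\n%s", shellCmd, out)
    }

    func main() {
        ctx := "auto-543676"
        probe(ctx, "nslookup kubernetes.default")    // in-cluster DNS
        probe(ctx, "nc -w 5 -i 5 -z localhost 8080") // local port
        probe(ctx, "nc -w 5 -i 5 -z netcat 8080")    // hairpin via the service name
    }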

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-543676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1101 10:41:16.536885  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4ftrm" [6dece456-4773-481a-bc8a-50c485d35ef2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4ftrm" [6dece456-4773-481a-bc8a-50c485d35ef2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.004822677s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (10.01s)
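UserAppExistsAfterStop waits for the dashboard pod (label k8s-app=kubernetes-dashboard) to come back Ready after the restart. Outside the harness the same wait maps onto kubectl wait; the timeout below mirrors the 9m0s used above:

kubectl --context default-k8s-diff-port-586066 -n kubernetes-dashboard \
  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s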

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (70.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1101 10:41:25.701511  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m10.841786271s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (70.84s)
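The Start subtests vary only in how the CNI is chosen: --cni accepts either a built-in plugin name or a path to a manifest on disk, and --enable-default-cni=true (used by the enable-default-cni profile below) is the older spelling for the built-in bridge CNI. A sketch with illustrative profile names, using the plain minikube binary rather than the test build:

# built-in CNI by name
minikube start -p flannel-demo --driver=kvm2 --container-runtime=crio --cni=flannel
# custom CNI from a manifest on disk, as in this test
minikube start -p custom-flannel-demo --driver=kvm2 --container-runtime=crio --cni=testdata/kube-flannel.yaml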

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4ftrm" [6dece456-4773-481a-bc8a-50c485d35ef2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006052984s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-586066 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (96.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m36.057600474s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (96.06s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-586066 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
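VerifyKubernetesImages lists the images cached on the node and reports anything outside the expected Kubernetes/minikube set. The JSON output is convenient to filter with jq; the repoTags field name matches recent minikube releases but may differ across versions:

out/minikube-linux-amd64 -p default-k8s-diff-port-586066 image list --format=json | jq -r '.[].repoTags[]'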

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-586066 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-586066 --alsologtostderr -v=1: (1.137778659s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-586066 -n default-k8s-diff-port-586066
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-586066 -n default-k8s-diff-port-586066: exit status 2 (242.819909ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-586066 -n default-k8s-diff-port-586066
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-586066 -n default-k8s-diff-port-586066: exit status 2 (254.108264ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-586066 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-586066 -n default-k8s-diff-port-586066
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-586066 -n default-k8s-diff-port-586066
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.02s)
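The Pause subtest freezes the control plane and kubelet, checks the reported state, then unpauses; exit status 2 from minikube status is expected while components are paused or stopped. The same sequence by hand:

out/minikube-linux-amd64 pause -p default-k8s-diff-port-586066
out/minikube-linux-amd64 status -p default-k8s-diff-port-586066 --format='{{.APIServer}}'   # "Paused", exit 2
out/minikube-linux-amd64 status -p default-k8s-diff-port-586066 --format='{{.Kubelet}}'     # "Stopped", exit 2
out/minikube-linux-amd64 unpause -p default-k8s-diff-port-586066
out/minikube-linux-amd64 status -p default-k8s-diff-port-586066 --format='{{.APIServer}}'   # back to "Running"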

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (95.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E1101 10:41:46.183456  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:41:57.499358  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:42:27.145391  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m35.413845806s)
--- PASS: TestNetworkPlugins/group/flannel/Start (95.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-543676 "pgrep -a kubelet"
I1101 10:42:35.236553  348518 config.go:182] Loaded profile config "custom-flannel-543676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)
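KubeletFlags greps the kubelet process line over SSH, which is also the quickest way to see which flags the node ended up with. The second command below is only a convenience filter, not part of the test, and assumes the usual --container-runtime-endpoint flag is present on the kubelet command line:

out/minikube-linux-amd64 ssh -p custom-flannel-543676 "pgrep -a kubelet"
out/minikube-linux-amd64 ssh -p custom-flannel-543676 "pgrep -a kubelet" | tr ' ' '\n' | grep -- --container-runtime-endpoint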

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-543676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5kp7b" [f8d5dd47-ded2-4830-bf71-e8acd8e710b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5kp7b" [f8d5dd47-ded2-4830-bf71-e8acd8e710b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003706765s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.71s)
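Every NetCatPod subtest applies the same testdata/netcat-deployment.yaml and waits for the pod to go Running. Outside the harness the wait step maps naturally onto a rollout check (the timeout here is illustrative):

kubectl --context custom-flannel-543676 replace --force -f testdata/netcat-deployment.yaml
kubectl --context custom-flannel-543676 rollout status deployment/netcat --timeout=300s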

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-543676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (80.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-543676 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m20.930514019s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-543676 "pgrep -a kubelet"
I1101 10:43:10.122571  348518 config.go:182] Loaded profile config "enable-default-cni-543676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-543676 replace --force -f testdata/netcat-deployment.yaml
E1101 10:43:10.157066  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/functional-165244/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sjjbn" [4e124ffa-474e-49e5-9b8c-4593be4de4d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-sjjbn" [4e124ffa-474e-49e5-9b8c-4593be4de4d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005290427s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-r8gvk" [759eda89-8363-4606-96ab-b32f1f05078e] Running
E1101 10:43:19.420940  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/old-k8s-version-152855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004875651s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
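ControllerPod only confirms that the flannel DaemonSet pod is Running in the kube-flannel namespace; the DaemonSet name below is inferred from the pod name in the log:

kubectl --context flannel-543676 -n kube-flannel get pods -l app=flannel -o wide
kubectl --context flannel-543676 -n kube-flannel rollout status daemonset/kube-flannel-ds --timeout=600s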

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-543676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-543676 "pgrep -a kubelet"
I1101 10:43:24.747786  348518 config.go:182] Loaded profile config "flannel-543676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-543676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zlcfm" [06fca9a7-8ee0-46df-8cda-df5c9f6063f7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zlcfm" [06fca9a7-8ee0-46df-8cda-df5c9f6063f7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004365757s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-543676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.20s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.18s)
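Each PersistentMounts subtest runs df -t ext4 over SSH to confirm the path is backed by the persistent ext4 data volume rather than tmpfs. A convenience sketch that loops over the same paths:

for d in /data /var/lib/docker /var/lib/cni /var/lib/kubelet /var/lib/minikube /var/lib/toolbox /var/lib/boot2docker; do
  out/minikube-linux-amd64 -p guest-651909 ssh "df -t ext4 $d | grep $d" || echo "$d is not on ext4"
done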

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.18s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p guest-651909 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
--- PASS: TestISOImage/eBPFSupport (0.18s)
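eBPFSupport checks for BTF type information at /sys/kernel/btf/vmlinux, which CO-RE-style eBPF tooling relies on. The same probe by hand, plus an optional kernel-config check that assumes the ISO kernel exposes /proc/config.gz:

out/minikube-linux-amd64 -p guest-651909 ssh "test -f /sys/kernel/btf/vmlinux && echo OK || echo 'NOT FOUND'"
out/minikube-linux-amd64 -p guest-651909 ssh "zcat /proc/config.gz 2>/dev/null | grep -E 'CONFIG_(BPF|DEBUG_INFO_BTF)='"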
E1101 10:43:41.986385  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/default-k8s-diff-port-586066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:42.628411  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/default-k8s-diff-port-586066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:43.910250  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/default-k8s-diff-port-586066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:46.471601  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/default-k8s-diff-port-586066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:49.067582  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/no-preload-122065/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 10:43:51.592991  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/default-k8s-diff-port-586066/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-543676 "pgrep -a kubelet"
I1101 10:44:23.683134  348518 config.go:182] Loaded profile config "bridge-543676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-543676 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pz5j5" [d47ab286-b94d-4cf6-b349-0cbbd8db38ae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pz5j5" [d47ab286-b94d-4cf6-b349-0cbbd8db38ae] Running
E1101 10:44:30.855535  348518 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21832-344560/.minikube/profiles/addons-610936/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005286668s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-543676 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-543676 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (40/337)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.32
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
129 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
130 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
267 TestStartStop/group/disable-driver-mounts 0.17
277 TestNetworkPlugins/group/kubenet 3.63
285 TestNetworkPlugins/group/cilium 5.32
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.32s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610936 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-114935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-114935
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires a CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-543676 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-543676

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-543676

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-543676

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-543676

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-543676

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-543676

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-543676

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-543676

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-543676

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-543676

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-543676

>>> host: crictl pods:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: crictl containers:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> k8s: describe netcat deployment:
error: context "kubenet-543676" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-543676" does not exist

>>> k8s: netcat logs:
error: context "kubenet-543676" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-543676" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-543676" does not exist

>>> k8s: coredns logs:
error: context "kubenet-543676" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-543676" does not exist

>>> k8s: api server logs:
error: context "kubenet-543676" does not exist

>>> host: /etc/cni:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: ip a s:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: ip r s:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: iptables-save:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: iptables table nat:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-543676" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-543676" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-543676" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: kubelet daemon config:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> k8s: kubelet logs:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-543676

>>> host: docker daemon status:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: docker daemon config:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: docker system info:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: cri-docker daemon status:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: cri-docker daemon config:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: cri-dockerd version:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: containerd daemon status:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: containerd daemon config:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: containerd config dump:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: crio daemon status:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: crio daemon config:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: /etc/crio:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

>>> host: crio config:
* Profile "kubenet-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-543676"

----------------------- debugLogs end: kubenet-543676 [took: 3.445711397s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-543676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-543676
--- SKIP: TestNetworkPlugins/group/kubenet (3.63s)
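
Note: every probe in the kubenet debugLogs dump above fails the same way because the "kubenet-543676" profile was never started, so no matching context exists in the kubeconfig. As a minimal, illustrative sketch (not part of the test suite), the Go program below checks for that context up front, the way one might gate the kubectl-based probes; it assumes k8s.io/client-go is available and that the default kubeconfig path is in use, neither of which comes from this report.

// Illustrative sketch only: verify a kubeconfig context exists before probing it.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Default kubeconfig location; the real test environment may differ.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "could not load kubeconfig:", err)
		os.Exit(1)
	}

	const name = "kubenet-543676" // context name taken from the log above
	if _, ok := cfg.Contexts[name]; !ok {
		// This is the condition the dump reports for every kubectl probe.
		fmt.Printf("context %q not present in %s; skipping kubectl probes\n", name, kubeconfig)
		return
	}
	fmt.Printf("context %q found; kubectl probes can run\n", name)
}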

TestNetworkPlugins/group/cilium (5.32s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-543676 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-543676

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-543676

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-543676

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-543676

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-543676

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-543676

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-543676

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-543676

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-543676

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-543676

>>> host: /etc/nsswitch.conf:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: /etc/hosts:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: /etc/resolv.conf:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-543676

>>> host: crictl pods:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: crictl containers:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> k8s: describe netcat deployment:
error: context "cilium-543676" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-543676" does not exist

>>> k8s: netcat logs:
error: context "cilium-543676" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-543676" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-543676" does not exist

>>> k8s: coredns logs:
error: context "cilium-543676" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-543676" does not exist

>>> k8s: api server logs:
error: context "cilium-543676" does not exist

>>> host: /etc/cni:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: ip a s:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: ip r s:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: iptables-save:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: iptables table nat:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-543676

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-543676

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-543676" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-543676" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-543676

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-543676

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-543676" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-543676" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-543676" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-543676" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-543676" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: kubelet daemon config:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> k8s: kubelet logs:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-543676

>>> host: docker daemon status:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: docker daemon config:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: docker system info:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: cri-docker daemon status:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: cri-docker daemon config:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: cri-dockerd version:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: containerd daemon status:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: containerd daemon config:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: containerd config dump:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: crio daemon status:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: crio daemon config:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: /etc/crio:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

>>> host: crio config:
* Profile "cilium-543676" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-543676"

----------------------- debugLogs end: cilium-543676 [took: 5.124486188s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-543676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-543676
--- SKIP: TestNetworkPlugins/group/cilium (5.32s)
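
Note: the cleanup step logged above ("out/minikube-linux-amd64 delete -p cilium-543676") removes the leftover profile. The Go sketch below shows one way such an invocation could be shelled out from a helper; the binary path and profile name are copied from the log, while the surrounding wiring is assumed rather than taken from helpers_test.go.

// Illustrative sketch only: shell out to the minikube binary to delete a profile.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Binary path and profile name copied verbatim from the log above.
	cmd := exec.Command("out/minikube-linux-amd64", "delete", "-p", "cilium-543676")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "profile cleanup failed:", err)
		os.Exit(1)
	}
	fmt.Println("profile cilium-543676 deleted")
}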