Test Report: KVM_Linux_crio 21835

73e6d6839bae6cdde957e116826ac4e2fc7d714a:2025-11-01:42153

Failed tests (4/343)

Order  Failed test                                      Duration (s)
37     TestAddons/parallel/Ingress                      157.05
244    TestPreload                                      151.48
252    TestKubernetesUpgrade                            986.65
300    TestPause/serial/SecondStartNoReconfiguration    54.14
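Each of these can usually be re-run in isolation against the same driver and runtime. A minimal sketch, assuming a built binary at out/minikube-linux-amd64 and the integration suite's usual -test.run / -minikube-start-args flags (the flag names are an assumption based on the suite's conventions, not taken from this report):

    go test -v -timeout 60m ./test/integration \
        -test.run 'TestAddons/parallel/Ingress' \
        -minikube-start-args='--driver=kvm2 --container-runtime=crio'
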
TestAddons/parallel/Ingress (157.05s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-468489 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-468489 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-468489 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [02be2896-2e22-4268-9b74-1264e195dc37] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [02be2896-2e22-4268-9b74-1264e195dc37] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.013745091s
I1101 08:32:50.734002    9793 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-468489 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m15.070451474s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-468489 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.108
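
Exit status 28 above is curl's "operation timed out" code, propagated back through minikube ssh. A hedged sketch for probing the same endpoint by hand with verbose output and explicit timeouts (profile name, namespace, and Host header are taken from this log):

    out/minikube-linux-amd64 -p addons-468489 ssh \
        "curl -sv --connect-timeout 10 --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
    kubectl --context addons-468489 -n ingress-nginx get pods,svc -o wide
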
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-468489 -n addons-468489
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-468489 logs -n 25: (1.352153425s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                  ARGS                                                                                                                                                                                                                                  │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-only-362299                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-only-362299 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ --download-only -p binary-mirror-153470 --alsologtostderr --binary-mirror http://127.0.0.1:38639 --driver=kvm2  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-153470 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ -p binary-mirror-153470                                                                                                                                                                                                                                                                                                                                                                                                                                                │ binary-mirror-153470 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ addons  │ disable dashboard -p addons-468489                                                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ addons  │ enable dashboard -p addons-468489                                                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ start   │ -p addons-468489 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-468489 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                            │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-468489 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ enable headlamp -p addons-468489 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-468489 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                               │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-468489 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-468489 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-468489 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ ip      │ addons-468489 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-468489 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                           │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-468489 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                   │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ ssh     │ addons-468489 ssh cat /opt/local-path-provisioner/pvc-cd2a8e6f-0b78-44b3-86d7-51ee5b835709_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                      │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:32 UTC │
	│ addons  │ addons-468489 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                        │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:33 UTC │
	│ ssh     │ addons-468489 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                               │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-468489 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                               │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:32 UTC │ 01 Nov 25 08:33 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-468489                                                                                                                                                                                                                                                                                                                                                                                         │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │ 01 Nov 25 08:33 UTC │
	│ addons  │ addons-468489 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │ 01 Nov 25 08:33 UTC │
	│ addons  │ addons-468489 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                    │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │ 01 Nov 25 08:33 UTC │
	│ addons  │ addons-468489 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:33 UTC │ 01 Nov 25 08:33 UTC │
	│ ip      │ addons-468489 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-468489        │ jenkins │ v1.37.0 │ 01 Nov 25 08:35 UTC │ 01 Nov 25 08:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:20
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:20.286995   10392 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:20.287203   10392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:20.287228   10392 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:20.287232   10392 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:20.287423   10392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 08:29:20.287896   10392 out.go:368] Setting JSON to false
	I1101 08:29:20.288665   10392 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":707,"bootTime":1761985053,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:29:20.288746   10392 start.go:143] virtualization: kvm guest
	I1101 08:29:20.290763   10392 out.go:179] * [addons-468489] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:29:20.292291   10392 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:29:20.292293   10392 notify.go:221] Checking for updates...
	I1101 08:29:20.293780   10392 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:20.295052   10392 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 08:29:20.296222   10392 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 08:29:20.297429   10392 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:29:20.298659   10392 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:29:20.300089   10392 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:20.330183   10392 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 08:29:20.331333   10392 start.go:309] selected driver: kvm2
	I1101 08:29:20.331346   10392 start.go:930] validating driver "kvm2" against <nil>
	I1101 08:29:20.331363   10392 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:29:20.332047   10392 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:20.332265   10392 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:29:20.332289   10392 cni.go:84] Creating CNI manager for ""
	I1101 08:29:20.332327   10392 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:29:20.332333   10392 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 08:29:20.332367   10392 start.go:353] cluster config:
	{Name:addons-468489 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1101 08:29:20.332451   10392 iso.go:125] acquiring lock: {Name:mk345092679db7c379cbaa00125c4f18e2b4a125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:29:20.334503   10392 out.go:179] * Starting "addons-468489" primary control-plane node in "addons-468489" cluster
	I1101 08:29:20.335721   10392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:20.335760   10392 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 08:29:20.335767   10392 cache.go:59] Caching tarball of preloaded images
	I1101 08:29:20.335862   10392 preload.go:233] Found /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 08:29:20.335877   10392 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:29:20.336180   10392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/config.json ...
	I1101 08:29:20.336202   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/config.json: {Name:mk8aca735bb3c1afb644bd37d8f027126ddf2db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:20.336369   10392 start.go:360] acquireMachinesLock for addons-468489: {Name:mk8049b4e421873947dfa0bcd96201ccb1e1825c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 08:29:20.336431   10392 start.go:364] duration metric: took 44.799µs to acquireMachinesLock for "addons-468489"
	I1101 08:29:20.336456   10392 start.go:93] Provisioning new machine with config: &{Name:addons-468489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.1 ClusterName:addons-468489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:29:20.336509   10392 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 08:29:20.338850   10392 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1101 08:29:20.339032   10392 start.go:159] libmachine.API.Create for "addons-468489" (driver="kvm2")
	I1101 08:29:20.339060   10392 client.go:173] LocalClient.Create starting
	I1101 08:29:20.339154   10392 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem
	I1101 08:29:20.480018   10392 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem
	I1101 08:29:20.765632   10392 main.go:143] libmachine: creating domain...
	I1101 08:29:20.765654   10392 main.go:143] libmachine: creating network...
	I1101 08:29:20.767140   10392 main.go:143] libmachine: found existing default network
	I1101 08:29:20.767388   10392 main.go:143] libmachine: <network>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:29:20.767960   10392 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d71bc0}
	I1101 08:29:20.768068   10392 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-addons-468489</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:29:20.775136   10392 main.go:143] libmachine: creating private network mk-addons-468489 192.168.39.0/24...
	I1101 08:29:20.844129   10392 main.go:143] libmachine: private network mk-addons-468489 192.168.39.0/24 created
	I1101 08:29:20.844470   10392 main.go:143] libmachine: <network>
	  <name>mk-addons-468489</name>
	  <uuid>2c1abea7-c4e7-4d53-b596-58a10f0d9c5f</uuid>
	  <bridge name='virbr1' stp='on' delay='0'/>
	  <mac address='52:54:00:9f:03:2c'/>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 08:29:20.844497   10392 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489 ...
	I1101 08:29:20.844515   10392 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21835-5912/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 08:29:20.844525   10392 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 08:29:20.844597   10392 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21835-5912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21835-5912/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 08:29:21.104690   10392 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa...
	I1101 08:29:21.132681   10392 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/addons-468489.rawdisk...
	I1101 08:29:21.132719   10392 main.go:143] libmachine: Writing magic tar header
	I1101 08:29:21.132751   10392 main.go:143] libmachine: Writing SSH key tar header
	I1101 08:29:21.132825   10392 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489 ...
	I1101 08:29:21.132888   10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489
	I1101 08:29:21.132909   10392 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489 (perms=drwx------)
	I1101 08:29:21.132918   10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube/machines
	I1101 08:29:21.132933   10392 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube/machines (perms=drwxr-xr-x)
	I1101 08:29:21.132943   10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 08:29:21.132954   10392 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube (perms=drwxr-xr-x)
	I1101 08:29:21.132962   10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912
	I1101 08:29:21.132972   10392 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912 (perms=drwxrwxr-x)
	I1101 08:29:21.132982   10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 08:29:21.132996   10392 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 08:29:21.133006   10392 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 08:29:21.133013   10392 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 08:29:21.133026   10392 main.go:143] libmachine: checking permissions on dir: /home
	I1101 08:29:21.133042   10392 main.go:143] libmachine: skipping /home - not owner
	I1101 08:29:21.133049   10392 main.go:143] libmachine: defining domain...
	I1101 08:29:21.134135   10392 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>addons-468489</name>
	  <memory unit='MiB'>4096</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/addons-468489.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-addons-468489'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 08:29:21.142082   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:22:5c:91 in network default
	I1101 08:29:21.142637   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:21.142652   10392 main.go:143] libmachine: starting domain...
	I1101 08:29:21.142657   10392 main.go:143] libmachine: ensuring networks are active...
	I1101 08:29:21.143320   10392 main.go:143] libmachine: Ensuring network default is active
	I1101 08:29:21.143666   10392 main.go:143] libmachine: Ensuring network mk-addons-468489 is active
	I1101 08:29:21.144220   10392 main.go:143] libmachine: getting domain XML...
	I1101 08:29:21.145281   10392 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>addons-468489</name>
	  <uuid>83960230-6f48-4964-81c1-c1246eb542bd</uuid>
	  <memory unit='KiB'>4194304</memory>
	  <currentMemory unit='KiB'>4194304</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/addons-468489.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:d2:1b:e9'/>
	      <source network='mk-addons-468489'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:22:5c:91'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 08:29:22.435316   10392 main.go:143] libmachine: waiting for domain to start...
	I1101 08:29:22.436615   10392 main.go:143] libmachine: domain is now running
	I1101 08:29:22.436629   10392 main.go:143] libmachine: waiting for IP...
	I1101 08:29:22.437441   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:22.437817   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:22.437829   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:22.438067   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:22.438112   10392 retry.go:31] will retry after 230.239695ms: waiting for domain to come up
	I1101 08:29:22.669584   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:22.670269   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:22.670286   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:22.670629   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:22.670661   10392 retry.go:31] will retry after 360.113061ms: waiting for domain to come up
	I1101 08:29:23.032146   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:23.032685   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:23.032706   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:23.032997   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:23.033033   10392 retry.go:31] will retry after 478.271754ms: waiting for domain to come up
	I1101 08:29:23.512730   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:23.513331   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:23.513347   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:23.513620   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:23.513650   10392 retry.go:31] will retry after 510.18084ms: waiting for domain to come up
	I1101 08:29:24.025380   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:24.026030   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:24.026050   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:24.026345   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:24.026381   10392 retry.go:31] will retry after 643.490483ms: waiting for domain to come up
	I1101 08:29:24.671129   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:24.671756   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:24.671770   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:24.672067   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:24.672101   10392 retry.go:31] will retry after 894.911325ms: waiting for domain to come up
	I1101 08:29:25.569148   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:25.569687   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:25.569708   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:25.569976   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:25.570007   10392 retry.go:31] will retry after 937.8264ms: waiting for domain to come up
	I1101 08:29:26.509104   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:26.509661   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:26.509682   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:26.509970   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:26.510022   10392 retry.go:31] will retry after 1.30157764s: waiting for domain to come up
	I1101 08:29:27.813547   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:27.814079   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:27.814095   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:27.814436   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:27.814467   10392 retry.go:31] will retry after 1.622542541s: waiting for domain to come up
	I1101 08:29:29.439367   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:29.439872   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:29.439891   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:29.440234   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:29.440272   10392 retry.go:31] will retry after 2.021531153s: waiting for domain to come up
	I1101 08:29:31.463955   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:31.464618   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:31.464642   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:31.465011   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:31.465053   10392 retry.go:31] will retry after 2.339644955s: waiting for domain to come up
	I1101 08:29:33.806067   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:33.806833   10392 main.go:143] libmachine: no network interface addresses found for domain addons-468489 (source=lease)
	I1101 08:29:33.806855   10392 main.go:143] libmachine: trying to list again with source=arp
	I1101 08:29:33.807111   10392 main.go:143] libmachine: unable to find current IP address of domain addons-468489 in network mk-addons-468489 (interfaces detected: [])
	I1101 08:29:33.807141   10392 retry.go:31] will retry after 3.305590216s: waiting for domain to come up
	I1101 08:29:37.115736   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.116391   10392 main.go:143] libmachine: domain addons-468489 has current primary IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.116412   10392 main.go:143] libmachine: found domain IP: 192.168.39.108
	I1101 08:29:37.116419   10392 main.go:143] libmachine: reserving static IP address...
	I1101 08:29:37.116848   10392 main.go:143] libmachine: unable to find host DHCP lease matching {name: "addons-468489", mac: "52:54:00:d2:1b:e9", ip: "192.168.39.108"} in network mk-addons-468489
	I1101 08:29:37.313092   10392 main.go:143] libmachine: reserved static IP address 192.168.39.108 for domain addons-468489
	I1101 08:29:37.313114   10392 main.go:143] libmachine: waiting for SSH...
	I1101 08:29:37.313120   10392 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 08:29:37.315925   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.316349   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:37.316375   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.316562   10392 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:37.316772   10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I1101 08:29:37.316783   10392 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 08:29:37.428897   10392 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:29:37.429266   10392 main.go:143] libmachine: domain creation complete
	I1101 08:29:37.431023   10392 machine.go:94] provisionDockerMachine start ...
	I1101 08:29:37.433509   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.433944   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:37.433967   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.434170   10392 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:37.434417   10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I1101 08:29:37.434433   10392 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:29:37.544818   10392 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 08:29:37.544847   10392 buildroot.go:166] provisioning hostname "addons-468489"
	I1101 08:29:37.547777   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.548177   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:37.548220   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.548366   10392 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:37.548544   10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I1101 08:29:37.548555   10392 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-468489 && echo "addons-468489" | sudo tee /etc/hostname
	I1101 08:29:37.675552   10392 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-468489
	
	I1101 08:29:37.678466   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.678902   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:37.678947   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.679148   10392 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:37.679400   10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I1101 08:29:37.679422   10392 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-468489' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-468489/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-468489' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:29:37.798874   10392 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:29:37.798901   10392 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5912/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5912/.minikube}
	I1101 08:29:37.798917   10392 buildroot.go:174] setting up certificates
	I1101 08:29:37.798924   10392 provision.go:84] configureAuth start
	I1101 08:29:37.801786   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.802256   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:37.802280   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.804669   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.805022   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:37.805045   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:37.805202   10392 provision.go:143] copyHostCerts
	I1101 08:29:37.805291   10392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem (1082 bytes)
	I1101 08:29:37.805432   10392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem (1123 bytes)
	I1101 08:29:37.805695   10392 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem (1679 bytes)
	I1101 08:29:37.805865   10392 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem org=jenkins.addons-468489 san=[127.0.0.1 192.168.39.108 addons-468489 localhost minikube]
	I1101 08:29:38.026554   10392 provision.go:177] copyRemoteCerts
	I1101 08:29:38.026609   10392 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:29:38.029015   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.029389   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:38.029409   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.029539   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:29:38.119168   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 08:29:38.148612   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 08:29:38.181367   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:29:38.210051   10392 provision.go:87] duration metric: took 411.113175ms to configureAuth
	I1101 08:29:38.210083   10392 buildroot.go:189] setting minikube options for container-runtime
	I1101 08:29:38.210304   10392 config.go:182] Loaded profile config "addons-468489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:29:38.212821   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.213190   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:38.213234   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.213409   10392 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:38.213586   10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I1101 08:29:38.213599   10392 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:29:38.462120   10392 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 08:29:38.462145   10392 machine.go:97] duration metric: took 1.031102449s to provisionDockerMachine
	I1101 08:29:38.462154   10392 client.go:176] duration metric: took 18.123088465s to LocalClient.Create
	I1101 08:29:38.462169   10392 start.go:167] duration metric: took 18.12313635s to libmachine.API.Create "addons-468489"
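For reference, a minimal sketch (commands assumed to be run inside the guest VM, e.g. via minikube ssh) of confirming that the CRI-O options drop-in written at 08:29:38 above took effect:

    sudo cat /etc/sysconfig/crio.minikube   # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio           # expect: active (crio was restarted right after the tee)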
	I1101 08:29:38.462175   10392 start.go:293] postStartSetup for "addons-468489" (driver="kvm2")
	I1101 08:29:38.462184   10392 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:29:38.462270   10392 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:29:38.465106   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.465457   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:38.465479   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.465618   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:29:38.550379   10392 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:29:38.555199   10392 info.go:137] Remote host: Buildroot 2025.02
	I1101 08:29:38.555256   10392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/addons for local assets ...
	I1101 08:29:38.555331   10392 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/files for local assets ...
	I1101 08:29:38.555367   10392 start.go:296] duration metric: took 93.187011ms for postStartSetup
	I1101 08:29:38.558643   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.559097   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:38.559123   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.559387   10392 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/config.json ...
	I1101 08:29:38.559588   10392 start.go:128] duration metric: took 18.223068675s to createHost
	I1101 08:29:38.561668   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.562140   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:38.562163   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.562347   10392 main.go:143] libmachine: Using SSH client type: native
	I1101 08:29:38.562552   10392 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I1101 08:29:38.562566   10392 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 08:29:38.675444   10392 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761985778.638239831
	
	I1101 08:29:38.675465   10392 fix.go:216] guest clock: 1761985778.638239831
	I1101 08:29:38.675471   10392 fix.go:229] Guest: 2025-11-01 08:29:38.638239831 +0000 UTC Remote: 2025-11-01 08:29:38.559601036 +0000 UTC m=+18.319532512 (delta=78.638795ms)
	I1101 08:29:38.675485   10392 fix.go:200] guest clock delta is within tolerance: 78.638795ms
	I1101 08:29:38.675489   10392 start.go:83] releasing machines lock for "addons-468489", held for 18.339046917s
	I1101 08:29:38.678475   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.678851   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:38.678874   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.679453   10392 ssh_runner.go:195] Run: cat /version.json
	I1101 08:29:38.679525   10392 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:29:38.682468   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.682767   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.682885   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:38.682918   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.683055   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:29:38.683303   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:38.683331   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:38.683507   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:29:38.761498   10392 ssh_runner.go:195] Run: systemctl --version
	I1101 08:29:38.790731   10392 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:29:38.947068   10392 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:29:38.954488   10392 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:29:38.954559   10392 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:29:38.975006   10392 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 08:29:38.975031   10392 start.go:496] detecting cgroup driver to use...
	I1101 08:29:38.975097   10392 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:29:38.994654   10392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:29:39.011254   10392 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:29:39.011312   10392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:29:39.029045   10392 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:29:39.045408   10392 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:29:39.197939   10392 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:29:39.406576   10392 docker.go:234] disabling docker service ...
	I1101 08:29:39.406644   10392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:29:39.422971   10392 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:29:39.437865   10392 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:29:39.592931   10392 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:29:39.737448   10392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 08:29:39.752725   10392 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:29:39.775074   10392 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:29:39.775137   10392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:39.786920   10392 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 08:29:39.786976   10392 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:39.798917   10392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:39.810958   10392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:39.823421   10392 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:29:39.836640   10392 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:39.849068   10392 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:29:39.869819   10392 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
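The sed edits above only touch a handful of keys in the CRI-O drop-in; a sketch of inspecting the result inside the guest (key names taken from the commands above, exact file contents assumed):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",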
	I1101 08:29:39.882015   10392 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:29:39.892351   10392 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 08:29:39.892415   10392 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 08:29:39.912167   10392 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
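The netfilter sysctl that could not be read at 08:29:39.892 should become available once br_netfilter is loaded; a sketch of re-checking inside the guest:

    lsmod | grep br_netfilter                        # module loaded by the modprobe above
    sudo sysctl net.bridge.bridge-nf-call-iptables   # readable once br_netfilter is loaded
    cat /proc/sys/net/ipv4/ip_forward                # expect: 1 after the echo above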
	I1101 08:29:39.923460   10392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:29:40.057456   10392 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 08:29:40.173283   10392 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:29:40.173371   10392 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 08:29:40.179118   10392 start.go:564] Will wait 60s for crictl version
	I1101 08:29:40.179201   10392 ssh_runner.go:195] Run: which crictl
	I1101 08:29:40.183607   10392 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 08:29:40.228592   10392 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 08:29:40.228701   10392 ssh_runner.go:195] Run: crio --version
	I1101 08:29:40.257840   10392 ssh_runner.go:195] Run: crio --version
	I1101 08:29:40.289257   10392 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 08:29:40.293356   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:40.293795   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:29:40.293822   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:29:40.294048   10392 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 08:29:40.299049   10392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:29:40.314560   10392 kubeadm.go:884] updating cluster {Name:addons-468489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:29:40.314743   10392 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:29:40.314810   10392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:29:40.350006   10392 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 08:29:40.350083   10392 ssh_runner.go:195] Run: which lz4
	I1101 08:29:40.354288   10392 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 08:29:40.359059   10392 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 08:29:40.359093   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
	I1101 08:29:41.695331   10392 crio.go:462] duration metric: took 1.34107457s to copy over tarball
	I1101 08:29:41.695402   10392 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 08:29:43.298883   10392 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.603455135s)
	I1101 08:29:43.298908   10392 crio.go:469] duration metric: took 1.603548837s to extract the tarball
	I1101 08:29:43.298916   10392 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 08:29:43.339359   10392 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:29:43.384229   10392 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:29:43.384249   10392 cache_images.go:86] Images are preloaded, skipping loading
	I1101 08:29:43.384256   10392 kubeadm.go:935] updating node { 192.168.39.108 8443 v1.34.1 crio true true} ...
	I1101 08:29:43.384330   10392 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-468489 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-468489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 08:29:43.384389   10392 ssh_runner.go:195] Run: crio config
	I1101 08:29:43.433182   10392 cni.go:84] Creating CNI manager for ""
	I1101 08:29:43.433219   10392 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:29:43.433236   10392 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:29:43.433260   10392 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.108 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-468489 NodeName:addons-468489 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.108"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.108 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:29:43.433391   10392 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.108
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-468489"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.108"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.108"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 08:29:43.433459   10392 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:29:43.445703   10392 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:29:43.445772   10392 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:29:43.457247   10392 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1101 08:29:43.478048   10392 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:29:43.498719   10392 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
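The generated kubeadm config shown above is staged as /var/tmp/minikube/kubeadm.yaml.new here; as a sketch, it could be sanity-checked with the bundled kubeadm before init (binary path taken from the binaries directory above; availability of "kubeadm config validate" in v1.34 is assumed):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # or, without touching node state:
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new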
	I1101 08:29:43.519056   10392 ssh_runner.go:195] Run: grep 192.168.39.108	control-plane.minikube.internal$ /etc/hosts
	I1101 08:29:43.523136   10392 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.108	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:29:43.537796   10392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:29:43.679728   10392 ssh_runner.go:195] Run: sudo systemctl start kubelet
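A sketch of confirming the kubelet unit and drop-in written above are in place after this explicit start (note the later kubeadm preflight warning that the service is started but not enabled):

    systemctl cat kubelet | head -n 20   # should show /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl is-active kubelet     # expect: active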
	I1101 08:29:43.699695   10392 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489 for IP: 192.168.39.108
	I1101 08:29:43.699717   10392 certs.go:195] generating shared ca certs ...
	I1101 08:29:43.699731   10392 certs.go:227] acquiring lock for ca certs: {Name:mk23a33d19209ad24f4406326ada43ab5cb57960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:43.699863   10392 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key
	I1101 08:29:43.978072   10392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt ...
	I1101 08:29:43.978096   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt: {Name:mk310d4ddeb698380ce931511e46a2949bc078d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:43.978262   10392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key ...
	I1101 08:29:43.978273   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key: {Name:mk98b96a94ed9005e8095fef7c6d586931f7a99a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:43.978342   10392 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key
	I1101 08:29:44.369174   10392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.crt ...
	I1101 08:29:44.369202   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.crt: {Name:mk8f8f4e72899c75d3a00be809552850e4649e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:44.369365   10392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key ...
	I1101 08:29:44.369395   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key: {Name:mke12f3aff84934fd9656eefdf4c90c69a503a0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:44.369475   10392 certs.go:257] generating profile certs ...
	I1101 08:29:44.369525   10392 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.key
	I1101 08:29:44.369540   10392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt with IP's: []
	I1101 08:29:44.567097   10392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt ...
	I1101 08:29:44.567124   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: {Name:mk6e6a0ab62c910983eeeceec962694b326a21fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:44.567280   10392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.key ...
	I1101 08:29:44.567292   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.key: {Name:mk74ca204ee8d1bdf9d5821b71407334c1b75417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:44.567357   10392 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key.82d38449
	I1101 08:29:44.567375   10392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt.82d38449 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.108]
	I1101 08:29:45.224964   10392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt.82d38449 ...
	I1101 08:29:45.224993   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt.82d38449: {Name:mkf4fb16c89192136e38e71006122bca1a9554cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:45.225153   10392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key.82d38449 ...
	I1101 08:29:45.225167   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key.82d38449: {Name:mk52cae8fb5fdc76f8f437013deea9cd816faf69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:45.225250   10392 certs.go:382] copying /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt.82d38449 -> /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt
	I1101 08:29:45.225767   10392 certs.go:386] copying /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key.82d38449 -> /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key
	I1101 08:29:45.225839   10392 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.key
	I1101 08:29:45.225859   10392 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.crt with IP's: []
	I1101 08:29:45.835858   10392 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.crt ...
	I1101 08:29:45.835885   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.crt: {Name:mke920dc8e6c8530147466fc91ae1c4a1614912c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:45.836045   10392 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.key ...
	I1101 08:29:45.836057   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.key: {Name:mkd0b4181fb007ecb32bee7ac450c0b01527b072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:29:45.836245   10392 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 08:29:45.836278   10392 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem (1082 bytes)
	I1101 08:29:45.836297   10392 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:29:45.836314   10392 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem (1679 bytes)
	I1101 08:29:45.836835   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:29:45.868558   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 08:29:45.899360   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:29:45.929990   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 08:29:45.961324   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:29:45.991070   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 08:29:46.021154   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:29:46.055649   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 08:29:46.090762   10392 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:29:46.123025   10392 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
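A sketch (inside the guest) of double-checking the SANs baked into the apiserver certificate requested at 08:29:44.567 above, using the path it was copied to here:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'
    # expect the IPs requested above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.108) plus the DNS SANs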
	I1101 08:29:46.146229   10392 ssh_runner.go:195] Run: openssl version
	I1101 08:29:46.153115   10392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:29:46.167493   10392 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:29:46.172725   10392 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:29:46.172798   10392 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:29:46.180474   10392 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
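The b5213941.0 name in the symlink above is the OpenSSL subject hash of minikubeCA.pem, which is what the hashing step at 08:29:46.172 computes; a sketch of reproducing it inside the guest:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # should print b5213941, matching the symlink name
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem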
	I1101 08:29:46.193728   10392 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:29:46.198816   10392 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:29:46.198876   10392 kubeadm.go:401] StartCluster: {Name:addons-468489 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-468489 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:29:46.198953   10392 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:29:46.199114   10392 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:29:46.239692   10392 cri.go:89] found id: ""
	I1101 08:29:46.239762   10392 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:29:46.254038   10392 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:29:46.266725   10392 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:29:46.287360   10392 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:29:46.287378   10392 kubeadm.go:158] found existing configuration files:
	
	I1101 08:29:46.287443   10392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:29:46.299161   10392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:29:46.299260   10392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:29:46.319559   10392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:29:46.331521   10392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:29:46.331572   10392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:29:46.343586   10392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:29:46.354924   10392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:29:46.354986   10392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:29:46.366696   10392 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:29:46.377956   10392 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:29:46.378028   10392 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 08:29:46.389964   10392 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1101 08:29:46.551041   10392 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 08:29:58.344700   10392 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:29:58.344770   10392 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:29:58.344852   10392 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:29:58.344959   10392 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:29:58.345093   10392 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:29:58.345225   10392 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:29:58.347031   10392 out.go:252]   - Generating certificates and keys ...
	I1101 08:29:58.347147   10392 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:29:58.347329   10392 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:29:58.347442   10392 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:29:58.347512   10392 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:29:58.347581   10392 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:29:58.347661   10392 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:29:58.347711   10392 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:29:58.347872   10392 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-468489 localhost] and IPs [192.168.39.108 127.0.0.1 ::1]
	I1101 08:29:58.347923   10392 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:29:58.348094   10392 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-468489 localhost] and IPs [192.168.39.108 127.0.0.1 ::1]
	I1101 08:29:58.348191   10392 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:29:58.348291   10392 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:29:58.348338   10392 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:29:58.348417   10392 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:29:58.348485   10392 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:29:58.348570   10392 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:29:58.348656   10392 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:29:58.348773   10392 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:29:58.348854   10392 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:29:58.348969   10392 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:29:58.349095   10392 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:29:58.351501   10392 out.go:252]   - Booting up control plane ...
	I1101 08:29:58.351595   10392 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:29:58.351681   10392 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:29:58.351759   10392 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:29:58.351896   10392 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:29:58.352052   10392 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:29:58.352202   10392 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:29:58.352395   10392 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:29:58.352451   10392 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:29:58.352626   10392 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:29:58.352737   10392 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:29:58.352811   10392 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 524.422641ms
	I1101 08:29:58.352889   10392 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:29:58.352982   10392 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.108:8443/livez
	I1101 08:29:58.353111   10392 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:29:58.353242   10392 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:29:58.353359   10392 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.568998866s
	I1101 08:29:58.353430   10392 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.72911267s
	I1101 08:29:58.353502   10392 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.502686455s
	I1101 08:29:58.353657   10392 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:29:58.353842   10392 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:29:58.353937   10392 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:29:58.354136   10392 kubeadm.go:319] [mark-control-plane] Marking the node addons-468489 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:29:58.354251   10392 kubeadm.go:319] [bootstrap-token] Using token: 3eegde.22eo73t8801ax86h
	I1101 08:29:58.356430   10392 out.go:252]   - Configuring RBAC rules ...
	I1101 08:29:58.356512   10392 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:29:58.356608   10392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:29:58.356782   10392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:29:58.356953   10392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:29:58.357062   10392 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:29:58.357151   10392 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:29:58.357312   10392 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:29:58.357381   10392 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:29:58.357422   10392 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:29:58.357428   10392 kubeadm.go:319] 
	I1101 08:29:58.357474   10392 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:29:58.357480   10392 kubeadm.go:319] 
	I1101 08:29:58.357585   10392 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:29:58.357596   10392 kubeadm.go:319] 
	I1101 08:29:58.357635   10392 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:29:58.357722   10392 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:29:58.357787   10392 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:29:58.357793   10392 kubeadm.go:319] 
	I1101 08:29:58.357834   10392 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:29:58.357839   10392 kubeadm.go:319] 
	I1101 08:29:58.357908   10392 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:29:58.357922   10392 kubeadm.go:319] 
	I1101 08:29:58.357996   10392 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:29:58.358099   10392 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:29:58.358192   10392 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:29:58.358200   10392 kubeadm.go:319] 
	I1101 08:29:58.358318   10392 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:29:58.358385   10392 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:29:58.358391   10392 kubeadm.go:319] 
	I1101 08:29:58.358454   10392 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3eegde.22eo73t8801ax86h \
	I1101 08:29:58.358679   10392 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a5abe2adb0c939d52fba184971121a4379087a8fcf67d55f536fc49608a1d330 \
	I1101 08:29:58.358709   10392 kubeadm.go:319] 	--control-plane 
	I1101 08:29:58.358715   10392 kubeadm.go:319] 
	I1101 08:29:58.358853   10392 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:29:58.358864   10392 kubeadm.go:319] 
	I1101 08:29:58.358979   10392 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3eegde.22eo73t8801ax86h \
	I1101 08:29:58.359139   10392 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a5abe2adb0c939d52fba184971121a4379087a8fcf67d55f536fc49608a1d330 
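The --discovery-token-ca-cert-hash printed above can be recomputed from the cluster CA; a sketch, assuming an RSA CA key and the certificatesDir (/var/lib/minikube/certs) from the kubeadm config above:

    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # expect: a5abe2adb0c939d52fba184971121a4379087a8fcf67d55f536fc49608a1d330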
	I1101 08:29:58.359168   10392 cni.go:84] Creating CNI manager for ""
	I1101 08:29:58.359181   10392 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:29:58.360947   10392 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 08:29:58.362246   10392 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 08:29:58.375945   10392 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 08:29:58.401242   10392 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:29:58.401349   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:29:58.401364   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-468489 minikube.k8s.io/updated_at=2025_11_01T08_29_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=addons-468489 minikube.k8s.io/primary=true
	I1101 08:29:58.564646   10392 ops.go:34] apiserver oom_adj: -16
	I1101 08:29:58.564756   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:29:59.065168   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:29:59.565725   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:00.065630   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:00.565574   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:01.065765   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:01.565866   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:02.065122   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:02.565602   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:03.065166   10392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:30:03.163895   10392 kubeadm.go:1114] duration metric: took 4.762618499s to wait for elevateKubeSystemPrivileges
	I1101 08:30:03.163935   10392 kubeadm.go:403] duration metric: took 16.965062697s to StartCluster
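The burst of "kubectl get sa default" runs above is a simple poll: the probe is repeated on a roughly 500 ms cadence until it succeeds or a deadline expires, and the elapsed time is then reported as the duration metric. A generic Go sketch of that pattern (the probe command mirrors the log; the helper names are hypothetical, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitFor repeatedly runs probe every interval until it succeeds or timeout elapses.
func waitFor(probe func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := probe(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s: %w", timeout, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	// Probe: succeeds once the default service account exists.
	probe := func() error {
		out, err := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig").CombinedOutput()
		if err != nil {
			return errors.New(string(out))
		}
		return nil
	}
	if err := waitFor(probe, 500*time.Millisecond, time.Minute); err != nil {
		panic(err)
	}
}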
	I1101 08:30:03.163956   10392 settings.go:142] acquiring lock: {Name:mk818d33e162ca33774e3ab05f6aac30f8feaf64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:03.164097   10392 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 08:30:03.164629   10392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/kubeconfig: {Name:mk599bec02e6b7062c3926243176124a4bc71dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:30:03.164872   10392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:30:03.164883   10392 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.108 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:30:03.164947   10392 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:30:03.165071   10392 addons.go:70] Setting yakd=true in profile "addons-468489"
	I1101 08:30:03.165090   10392 config.go:182] Loaded profile config "addons-468489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:03.165094   10392 addons.go:70] Setting ingress=true in profile "addons-468489"
	I1101 08:30:03.165112   10392 addons.go:70] Setting ingress-dns=true in profile "addons-468489"
	I1101 08:30:03.165092   10392 addons.go:239] Setting addon yakd=true in "addons-468489"
	I1101 08:30:03.165132   10392 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-468489"
	I1101 08:30:03.165161   10392 addons.go:70] Setting registry-creds=true in profile "addons-468489"
	I1101 08:30:03.165162   10392 addons.go:70] Setting default-storageclass=true in profile "addons-468489"
	I1101 08:30:03.165179   10392 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-468489"
	I1101 08:30:03.165192   10392 addons.go:239] Setting addon ingress=true in "addons-468489"
	I1101 08:30:03.165194   10392 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-468489"
	I1101 08:30:03.165249   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.165266   10392 addons.go:70] Setting metrics-server=true in profile "addons-468489"
	I1101 08:30:03.165283   10392 addons.go:239] Setting addon metrics-server=true in "addons-468489"
	I1101 08:30:03.165307   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.165379   10392 addons.go:70] Setting inspektor-gadget=true in profile "addons-468489"
	I1101 08:30:03.165403   10392 addons.go:239] Setting addon inspektor-gadget=true in "addons-468489"
	I1101 08:30:03.165440   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.165610   10392 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-468489"
	I1101 08:30:03.165654   10392 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-468489"
	I1101 08:30:03.165676   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.165151   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.165721   10392 addons.go:239] Setting addon registry-creds=true in "addons-468489"
	I1101 08:30:03.165746   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.166130   10392 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-468489"
	I1101 08:30:03.166175   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.165251   10392 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-468489"
	I1101 08:30:03.166446   10392 addons.go:70] Setting gcp-auth=true in profile "addons-468489"
	I1101 08:30:03.166461   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.166467   10392 mustload.go:66] Loading cluster: addons-468489
	I1101 08:30:03.166487   10392 addons.go:239] Setting addon ingress-dns=true in "addons-468489"
	I1101 08:30:03.166534   10392 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-468489"
	I1101 08:30:03.166551   10392 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-468489"
	I1101 08:30:03.166579   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.166656   10392 config.go:182] Loaded profile config "addons-468489": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:30:03.167166   10392 addons.go:70] Setting storage-provisioner=true in profile "addons-468489"
	I1101 08:30:03.167323   10392 addons.go:239] Setting addon storage-provisioner=true in "addons-468489"
	I1101 08:30:03.167354   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.167196   10392 addons.go:70] Setting registry=true in profile "addons-468489"
	I1101 08:30:03.167413   10392 addons.go:239] Setting addon registry=true in "addons-468489"
	I1101 08:30:03.167434   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.167229   10392 addons.go:70] Setting volcano=true in profile "addons-468489"
	I1101 08:30:03.167501   10392 addons.go:239] Setting addon volcano=true in "addons-468489"
	I1101 08:30:03.167547   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.167239   10392 addons.go:70] Setting volumesnapshots=true in profile "addons-468489"
	I1101 08:30:03.168066   10392 addons.go:239] Setting addon volumesnapshots=true in "addons-468489"
	I1101 08:30:03.168092   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.165140   10392 addons.go:70] Setting cloud-spanner=true in profile "addons-468489"
	I1101 08:30:03.168350   10392 addons.go:239] Setting addon cloud-spanner=true in "addons-468489"
	I1101 08:30:03.168382   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.168513   10392 out.go:179] * Verifying Kubernetes components...
	I1101 08:30:03.170001   10392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:30:03.174255   10392 addons.go:239] Setting addon default-storageclass=true in "addons-468489"
	I1101 08:30:03.174300   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.174978   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.175135   10392 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-468489"
	I1101 08:30:03.175165   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:03.175387   10392 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:30:03.175450   10392 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:30:03.175461   10392 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:30:03.175464   10392 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:30:03.176330   10392 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:30:03.175498   10392 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 08:30:03.175633   10392 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	W1101 08:30:03.176083   10392 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:30:03.175481   10392 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:30:03.177077   10392 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:30:03.177559   10392 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:30:03.178045   10392 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 08:30:03.178067   10392 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:30:03.177469   10392 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:30:03.178101   10392 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:30:03.178856   10392 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:30:03.178870   10392 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:30:03.179305   10392 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:30:03.178914   10392 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:30:03.179459   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:30:03.179700   10392 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:30:03.179703   10392 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:03.179746   10392 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:30:03.179719   10392 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:30:03.180226   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:30:03.179719   10392 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:30:03.180327   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:30:03.179765   10392 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:30:03.179770   10392 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:30:03.179792   10392 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:30:03.181690   10392 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:30:03.181744   10392 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:30:03.182259   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:30:03.181783   10392 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:30:03.182342   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:30:03.182536   10392 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:30:03.182592   10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:30:03.182937   10392 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:30:03.182806   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.183351   10392 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:03.183354   10392 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:30:03.183830   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:30:03.184052   10392 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:30:03.184103   10392 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:30:03.184408   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:30:03.184574   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.184604   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.184804   10392 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:30:03.184916   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.184956   10392 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:30:03.185148   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.185150   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:30:03.185381   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.185557   10392 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:30:03.185572   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:30:03.186708   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.186726   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.186749   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.186748   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.187326   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.187326   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.187606   10392 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:30:03.188436   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.189256   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.189593   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.190101   10392 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:30:03.190237   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.190274   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.190682   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.190716   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.190748   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.191057   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.191235   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.191270   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.191743   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.192521   10392 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:30:03.192609   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.192726   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.192754   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.193441   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.193500   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.194226   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.194684   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.194709   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.194794   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.195019   10392 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:30:03.195020   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.195235   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.195374   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.195394   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.195714   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.196002   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.196132   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.196247   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.196278   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.196446   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.196251   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.196465   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.196886   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.196916   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.197011   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.197178   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.197549   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.197736   10392 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:30:03.197780   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.197806   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.197807   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.197851   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.198030   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.198173   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:03.199257   10392 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:30:03.199280   10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:30:03.201991   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.202529   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:03.202565   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:03.202745   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	W1101 08:30:03.391277   10392 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59638->192.168.39.108:22: read: connection reset by peer
	I1101 08:30:03.391316   10392 retry.go:31] will retry after 285.134778ms: ssh: handshake failed: read tcp 192.168.39.1:59638->192.168.39.108:22: read: connection reset by peer
	W1101 08:30:03.434857   10392 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59648->192.168.39.108:22: read: connection reset by peer
	I1101 08:30:03.434884   10392 retry.go:31] will retry after 359.33267ms: ssh: handshake failed: read tcp 192.168.39.1:59648->192.168.39.108:22: read: connection reset by peer
	W1101 08:30:03.434971   10392 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59674->192.168.39.108:22: read: connection reset by peer
	I1101 08:30:03.434984   10392 retry.go:31] will retry after 238.429211ms: ssh: handshake failed: read tcp 192.168.39.1:59674->192.168.39.108:22: read: connection reset by peer
	W1101 08:30:03.435024   10392 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:59662->192.168.39.108:22: read: connection reset by peer
	I1101 08:30:03.435065   10392 retry.go:31] will retry after 357.609129ms: ssh: handshake failed: read tcp 192.168.39.1:59662->192.168.39.108:22: read: connection reset by peer
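The handshake failures above are not fatal: each SSH dial is retried after a short randomized delay, which is what the "will retry after …" lines record. A generic retry-with-jitter helper in Go illustrating that pattern (hypothetical names; minikube's own retry logic lives in retry.go and is not reproduced here):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter runs op up to attempts times, sleeping a random fraction of
// base between tries and doubling base after every failure.
func retryWithJitter(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		delay := time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		base *= 2
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	dial := func() error { return fmt.Errorf("ssh: handshake failed") } // stand-in for the real dial
	_ = retryWithJitter(4, 400*time.Millisecond, dial)
}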
	I1101 08:30:03.706734   10392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:30:03.706820   10392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
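The pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side IP (192.168.39.1). minikube does it by shelling out to kubectl and sed exactly as shown; purely as an illustration, the same edit through client-go might look like the sketch below (clientset construction omitted; indentation of the forward directive is assumed to match the sed pattern in the log):

// Sketch only: assumes an existing client of type kubernetes.Interface.
package coredns

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const hostsBlock = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
`

// injectHostRecord inserts a hosts{} block ahead of the forward directive.
func injectHostRecord(ctx context.Context, client kubernetes.Interface) error {
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	corefile := cm.Data["Corefile"]
	if !strings.Contains(corefile, "host.minikube.internal") {
		cm.Data["Corefile"] = strings.Replace(corefile,
			"        forward . /etc/resolv.conf",
			hostsBlock+"        forward . /etc/resolv.conf", 1)
		_, err = client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	}
	return err
}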
	I1101 08:30:04.042176   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:30:04.061546   10392 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:30:04.061573   10392 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:30:04.138326   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:30:04.167264   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:30:04.199045   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:30:04.230314   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:30:04.332010   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:30:04.366496   10392 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:30:04.366527   10392 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:30:04.388613   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:30:04.418567   10392 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:04.418588   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:30:04.531931   10392 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:30:04.531953   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:30:04.602372   10392 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:30:04.602396   10392 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:30:04.623278   10392 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:30:04.623298   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:30:04.638958   10392 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:30:04.638986   10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:30:04.927745   10392 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:30:04.927769   10392 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:30:04.929411   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:30:05.007339   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:30:05.063646   10392 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:30:05.063674   10392 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:30:05.104918   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:05.176550   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:30:05.204105   10392 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:30:05.204141   10392 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:30:05.233124   10392 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:30:05.233154   10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:30:05.335011   10392 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:30:05.335037   10392 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:30:05.336054   10392 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:30:05.336076   10392 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:30:05.518105   10392 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:30:05.518133   10392 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:30:05.548240   10392 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:30:05.548273   10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:30:05.676546   10392 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:30:05.676575   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:30:05.716559   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:30:05.775309   10392 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:30:05.775333   10392 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:30:05.841927   10392 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:30:05.841954   10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:30:06.040149   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:30:06.209743   10392 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:06.209762   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:30:06.330963   10392 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:30:06.330995   10392 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:30:06.693643   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:06.743815   10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:30:06.743846   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:30:07.159397   10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:30:07.159435   10392 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:30:07.171625   10392 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.464769922s)
	I1101 08:30:07.171664   10392 start.go:977] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1101 08:30:07.171674   10392 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.464910165s)
	I1101 08:30:07.171728   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.129522115s)
	I1101 08:30:07.172532   10392 node_ready.go:35] waiting up to 6m0s for node "addons-468489" to be "Ready" ...
	I1101 08:30:07.183789   10392 node_ready.go:49] node "addons-468489" is "Ready"
	I1101 08:30:07.183821   10392 node_ready.go:38] duration metric: took 11.264748ms for node "addons-468489" to be "Ready" ...
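The Ready check above passes when the node reports a NodeReady condition with status True. An illustrative client-go version of that check (assumes an existing clientset; not minikube's implementation):

package nodecheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}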
	I1101 08:30:07.183834   10392 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:30:07.183888   10392 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:30:07.626405   10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:30:07.626427   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:30:07.680736   10392 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-468489" context rescaled to 1 replicas
	I1101 08:30:07.975695   10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:30:07.975717   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:30:08.316434   10392 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:30:08.316457   10392 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:30:08.580662   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:30:09.574961   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (5.407661305s)
	I1101 08:30:09.574999   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.375928298s)
	I1101 08:30:09.575055   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.344717575s)
	I1101 08:30:09.575084   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.243046067s)
	I1101 08:30:09.575154   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.18651527s)
	I1101 08:30:09.575373   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.437015745s)
	I1101 08:30:10.508042   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.578596704s)
	I1101 08:30:10.620951   10392 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:30:10.623710   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:10.624167   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:10.624195   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:10.624418   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:11.047958   10392 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:30:11.147489   10392 addons.go:239] Setting addon gcp-auth=true in "addons-468489"
	I1101 08:30:11.147543   10392 host.go:66] Checking if "addons-468489" exists ...
	I1101 08:30:11.149825   10392 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:30:11.152708   10392 main.go:143] libmachine: domain addons-468489 has defined MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:11.153262   10392 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:e9", ip: ""} in network mk-addons-468489: {Iface:virbr1 ExpiryTime:2025-11-01 09:29:35 +0000 UTC Type:0 Mac:52:54:00:d2:1b:e9 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:addons-468489 Clientid:01:52:54:00:d2:1b:e9}
	I1101 08:30:11.153303   10392 main.go:143] libmachine: domain addons-468489 has defined IP address 192.168.39.108 and MAC address 52:54:00:d2:1b:e9 in network mk-addons-468489
	I1101 08:30:11.153474   10392 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/addons-468489/id_rsa Username:docker}
	I1101 08:30:12.408637   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.401262606s)
	I1101 08:30:12.408670   10392 addons.go:480] Verifying addon ingress=true in "addons-468489"
	I1101 08:30:12.408766   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.303806003s)
	W1101 08:30:12.408813   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:12.408842   10392 retry.go:31] will retry after 288.598901ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
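The validation error above is consistent with the earlier transfer of /etc/kubernetes/addons/ig-crd.yaml being only 14 bytes: the file carries no apiVersion or kind, so client-side validation rejects it even though the other gadget objects were created. As a side note, a pre-flight check of that invariant could be bolted onto an apply step; the sketch below is illustrative only (it is not part of minikube, and multi-document splitting is simplified):

package preflight

import (
	"fmt"
	"strings"

	"sigs.k8s.io/yaml"
)

// checkManifest verifies every non-empty YAML document declares apiVersion and kind,
// which is exactly what kubectl's validation complained about for ig-crd.yaml.
func checkManifest(data []byte) error {
	for i, doc := range strings.Split(string(data), "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		var obj struct {
			APIVersion string `json:"apiVersion"`
			Kind       string `json:"kind"`
		}
		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
			return fmt.Errorf("document %d: %w", i, err)
		}
		if obj.APIVersion == "" || obj.Kind == "" {
			return fmt.Errorf("document %d: apiVersion or kind not set", i)
		}
	}
	return nil
}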
	I1101 08:30:12.408867   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.232232585s)
	I1101 08:30:12.408888   10392 addons.go:480] Verifying addon registry=true in "addons-468489"
	I1101 08:30:12.408933   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.692338295s)
	I1101 08:30:12.408992   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.368816743s)
	I1101 08:30:12.408997   10392 addons.go:480] Verifying addon metrics-server=true in "addons-468489"
	I1101 08:30:12.410411   10392 out.go:179] * Verifying ingress addon...
	I1101 08:30:12.411408   10392 out.go:179] * Verifying registry addon...
	I1101 08:30:12.411414   10392 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-468489 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:30:12.412639   10392 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:30:12.413316   10392 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:30:12.503710   10392 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:30:12.503741   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:12.503722   10392 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:30:12.503759   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
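Both waits above watch for pods matching a label selector to leave Pending and become Running. Listing pods by selector with client-go looks roughly like the sketch below (clientset assumed; the selector in the usage comment is the ingress one from the wait above):

package kapi

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allRunning reports whether every pod matching selector in ns is in phase Running.
func allRunning(ctx context.Context, client kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return true, nil
}

// Example: allRunning(ctx, client, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")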
	I1101 08:30:12.621620   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.927935366s)
	I1101 08:30:12.621669   10392 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.437759045s)
	W1101 08:30:12.621674   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:30:12.621695   10392 api_server.go:72] duration metric: took 9.456787261s to wait for apiserver process to appear ...
	I1101 08:30:12.621699   10392 retry.go:31] will retry after 217.397158ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:30:12.621703   10392 api_server.go:88] waiting for apiserver healthz status ...
	I1101 08:30:12.621723   10392 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I1101 08:30:12.641132   10392 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I1101 08:30:12.642571   10392 api_server.go:141] control plane version: v1.34.1
	I1101 08:30:12.642592   10392 api_server.go:131] duration metric: took 20.8825ms to wait for apiserver health ...
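The healthz probe above simply expects HTTP 200 with body "ok" from the apiserver's /healthz endpoint. A minimal Go probe of the same endpoint (certificate verification is skipped here for brevity; a real client would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip TLS verification; production code should load the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.108:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}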
	I1101 08:30:12.642600   10392 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:30:12.657340   10392 system_pods.go:59] 15 kube-system pods found
	I1101 08:30:12.657381   10392 system_pods.go:61] "amd-gpu-device-plugin-wx8s2" [81d7a980-35fc-40ae-a47f-4be99c0b6c65] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:12.657393   10392 system_pods.go:61] "coredns-66bc5c9577-ms7np" [d9442c37-8e1e-4201-9f54-a883e9756f4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:12.657404   10392 system_pods.go:61] "coredns-66bc5c9577-sjgmx" [66422fdc-0c8f-4909-b971-478ee3ec6443] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:12.657410   10392 system_pods.go:61] "etcd-addons-468489" [264ceabd-3ca1-4077-89d1-f38eb22dffa5] Running
	I1101 08:30:12.657417   10392 system_pods.go:61] "kube-apiserver-addons-468489" [0f311ff5-25a6-4ac0-b279-0a23db6667f7] Running
	I1101 08:30:12.657426   10392 system_pods.go:61] "kube-controller-manager-addons-468489" [f14d55ce-f86e-497f-ad0d-8080ce321467] Running
	I1101 08:30:12.657433   10392 system_pods.go:61] "kube-ingress-dns-minikube" [36080b1f-6e52-4871-bf53-646c532b90bb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:12.657441   10392 system_pods.go:61] "kube-proxy-d6zrs" [476d893f-eeca-41a3-aa64-4f3340875cdf] Running
	I1101 08:30:12.657445   10392 system_pods.go:61] "kube-scheduler-addons-468489" [7b378d38-fbcf-4987-b14d-3aa3c65a78de] Running
	I1101 08:30:12.657450   10392 system_pods.go:61] "metrics-server-85b7d694d7-fq64r" [fa41a986-93b3-4aff-bb56-494cf440e1f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:12.657458   10392 system_pods.go:61] "nvidia-device-plugin-daemonset-f2qxl" [ec4ee384-540b-4a75-84b3-4e570d3d9f23] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:12.657470   10392 system_pods.go:61] "registry-6b586f9694-xfrhn" [f3392fde-46f3-42dc-832d-20224c4f0549] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:12.657478   10392 system_pods.go:61] "registry-creds-764b6fb674-kv2dx" [50f610f4-b848-4266-a771-a9ad1114d203] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:12.657487   10392 system_pods.go:61] "registry-proxy-rhvsz" [55e49aa2-d062-47e2-8c75-d338178ea4a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:12.657494   10392 system_pods.go:61] "storage-provisioner" [4b0ce500-deaa-4b2b-9613-8479f762e6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:30:12.657505   10392 system_pods.go:74] duration metric: took 14.89888ms to wait for pod list to return data ...
	I1101 08:30:12.657519   10392 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:30:12.698016   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:12.704231   10392 default_sa.go:45] found service account: "default"
	I1101 08:30:12.704250   10392 default_sa.go:55] duration metric: took 46.725168ms for default service account to be created ...
	I1101 08:30:12.704262   10392 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:30:12.755551   10392 system_pods.go:86] 17 kube-system pods found
	I1101 08:30:12.755589   10392 system_pods.go:89] "amd-gpu-device-plugin-wx8s2" [81d7a980-35fc-40ae-a47f-4be99c0b6c65] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:30:12.755600   10392 system_pods.go:89] "coredns-66bc5c9577-ms7np" [d9442c37-8e1e-4201-9f54-a883e9756f4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:12.755614   10392 system_pods.go:89] "coredns-66bc5c9577-sjgmx" [66422fdc-0c8f-4909-b971-478ee3ec6443] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:30:12.755623   10392 system_pods.go:89] "etcd-addons-468489" [264ceabd-3ca1-4077-89d1-f38eb22dffa5] Running
	I1101 08:30:12.755633   10392 system_pods.go:89] "kube-apiserver-addons-468489" [0f311ff5-25a6-4ac0-b279-0a23db6667f7] Running
	I1101 08:30:12.755639   10392 system_pods.go:89] "kube-controller-manager-addons-468489" [f14d55ce-f86e-497f-ad0d-8080ce321467] Running
	I1101 08:30:12.755647   10392 system_pods.go:89] "kube-ingress-dns-minikube" [36080b1f-6e52-4871-bf53-646c532b90bb] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:30:12.755651   10392 system_pods.go:89] "kube-proxy-d6zrs" [476d893f-eeca-41a3-aa64-4f3340875cdf] Running
	I1101 08:30:12.755657   10392 system_pods.go:89] "kube-scheduler-addons-468489" [7b378d38-fbcf-4987-b14d-3aa3c65a78de] Running
	I1101 08:30:12.755668   10392 system_pods.go:89] "metrics-server-85b7d694d7-fq64r" [fa41a986-93b3-4aff-bb56-494cf440e1f9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:30:12.755676   10392 system_pods.go:89] "nvidia-device-plugin-daemonset-f2qxl" [ec4ee384-540b-4a75-84b3-4e570d3d9f23] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:30:12.755685   10392 system_pods.go:89] "registry-6b586f9694-xfrhn" [f3392fde-46f3-42dc-832d-20224c4f0549] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:30:12.755693   10392 system_pods.go:89] "registry-creds-764b6fb674-kv2dx" [50f610f4-b848-4266-a771-a9ad1114d203] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:30:12.755701   10392 system_pods.go:89] "registry-proxy-rhvsz" [55e49aa2-d062-47e2-8c75-d338178ea4a8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:30:12.755707   10392 system_pods.go:89] "snapshot-controller-7d9fbc56b8-79mgm" [5a850e29-a396-460b-9e3a-b1253224ae87] Pending
	I1101 08:30:12.755715   10392 system_pods.go:89] "snapshot-controller-7d9fbc56b8-p4lmm" [a0898425-e644-493d-a304-1fb4bcba103b] Pending
	I1101 08:30:12.755722   10392 system_pods.go:89] "storage-provisioner" [4b0ce500-deaa-4b2b-9613-8479f762e6b8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:30:12.755730   10392 system_pods.go:126] duration metric: took 51.463181ms to wait for k8s-apps to be running ...
	I1101 08:30:12.755740   10392 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:30:12.755808   10392 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:30:12.840103   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:30:12.925433   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:12.928880   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:13.427147   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:13.429057   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:13.953781   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:13.958543   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:14.011569   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.430862697s)
	I1101 08:30:14.011603   10392 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-468489"
	I1101 08:30:14.011650   10392 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.861791562s)
	I1101 08:30:14.013557   10392 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:30:14.013566   10392 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:30:14.015062   10392 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:30:14.015584   10392 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:30:14.016600   10392 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:30:14.016616   10392 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:30:14.024455   10392 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:30:14.024472   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
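
Each of the kapi.go:96 lines in this stretch is one tick of a poll loop: list the pods matching the addon's label selector and keep waiting while any of them is still Pending. A sketch of that loop with client-go (the kubeconfig path, poll interval, and timeout are assumptions; this is not minikube's kapi.go implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        selector := "kubernetes.io/minikube-addons=csi-hostpath-driver"
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil {
                    return false, nil // transient API error: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", p.Name, p.Status.Phase)
                        return false, nil
                    }
                }
                return len(pods.Items) > 0, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("all csi-hostpath-driver pods are Running")
    }
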
	I1101 08:30:14.181854   10392 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:30:14.181883   10392 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:30:14.321285   10392 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:30:14.321317   10392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:30:14.427287   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:14.427312   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:14.483889   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:30:14.525108   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:14.918308   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:14.918784   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:15.019838   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:15.419098   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:15.419232   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:15.519747   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:15.922265   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:15.922649   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:16.038244   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:16.164717   10392 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.408881174s)
	I1101 08:30:16.164759   10392 system_svc.go:56] duration metric: took 3.409014547s WaitForService to wait for kubelet
	I1101 08:30:16.164772   10392 kubeadm.go:587] duration metric: took 12.999862562s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:30:16.164797   10392 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:30:16.164859   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.324712874s)
	I1101 08:30:16.165947   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.46789387s)
	W1101 08:30:16.165980   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:16.165999   10392 retry.go:31] will retry after 542.604514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
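
The ig-crd.yaml failure is different from the snapshot-class one: kubectl's client-side validation rejects the file outright because every Kubernetes manifest must carry apiVersion and kind, and the file on disk apparently has neither, so the repeated retries below keep hitting the same error. A small client-go sketch of that rule (the manifest bodies here are invented examples, not the real ig-crd.yaml):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes/scheme"
    )

    func main() {
        // A manifest with metadata but no TypeMeta, i.e. missing apiVersion and kind.
        bad := []byte(`
    metadata:
      name: example-object
    `)
        // A well-formed object always names both.
        good := []byte(`
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: gadget
    `)

        dec := scheme.Codecs.UniversalDeserializer()
        if _, _, err := dec.Decode(bad, nil, nil); err != nil {
            fmt.Println("bad manifest rejected:", err) // e.g. a "Kind is missing" decode error
        }
        if obj, gvk, err := dec.Decode(good, nil, nil); err == nil {
            fmt.Printf("decoded %T as %s\n", obj, gvk)
        }
    }
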
	I1101 08:30:16.181086   10392 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 08:30:16.181114   10392 node_conditions.go:123] node cpu capacity is 2
	I1101 08:30:16.181127   10392 node_conditions.go:105] duration metric: took 16.322752ms to run NodePressure ...
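
The NodePressure step reads each node's reported capacity (the ephemeral-storage and cpu figures above) along with its conditions. A short client-go sketch of that read, reusing the assumed kubeconfig path from the earlier sketch (not minikube's node_conditions.go):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
            for _, c := range n.Status.Conditions {
                // Pressure conditions (MemoryPressure, DiskPressure, PIDPressure) should report False.
                fmt.Printf("  condition %s=%s\n", c.Type, c.Status)
            }
        }
    }
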
	I1101 08:30:16.181141   10392 start.go:242] waiting for startup goroutines ...
	I1101 08:30:16.517975   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:16.527560   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:16.580333   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.096403789s)
	I1101 08:30:16.581452   10392 addons.go:480] Verifying addon gcp-auth=true in "addons-468489"
	I1101 08:30:16.583367   10392 out.go:179] * Verifying gcp-auth addon...
	I1101 08:30:16.585595   10392 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:30:16.607796   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:16.615190   10392 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:30:16.615219   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:16.709454   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:16.919591   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:16.922056   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:17.027344   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:17.093029   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:17.421113   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:17.421311   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:17.522202   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:17.591344   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:17.921720   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:17.922261   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:18.020928   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:18.089681   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:18.096659   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.387169921s)
	W1101 08:30:18.096691   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:18.096708   10392 retry.go:31] will retry after 393.17056ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:18.421778   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:18.422684   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:18.490844   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:18.522178   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:18.592305   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:18.918881   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:18.922142   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:19.019524   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:19.090800   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:19.423633   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:19.423719   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:19.521303   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:19.586618   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.095737641s)
	W1101 08:30:19.586655   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:19.586675   10392 retry.go:31] will retry after 1.214746941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:19.589134   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:19.918938   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:19.920707   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:20.019762   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:20.090329   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:20.416683   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:20.418154   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:20.522059   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:20.589941   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:20.802301   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:20.918468   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:20.922497   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:21.026120   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:21.094531   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:21.417174   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:21.420657   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:21.521697   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:21.595169   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:21.806494   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.004156159s)
	W1101 08:30:21.806525   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:21.806541   10392 retry.go:31] will retry after 1.026170972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:21.918142   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:21.918236   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:22.020182   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:22.089373   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:22.419502   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:22.420477   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:22.520201   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:22.590080   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:22.833451   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:22.916821   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:22.918179   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:23.022665   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:23.089467   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:23.417348   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:23.417389   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:23.519302   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:30:23.540890   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:23.540918   10392 retry.go:31] will retry after 1.13933478s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:23.590615   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:23.919332   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:23.921317   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:24.021511   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:24.092057   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:24.420242   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:24.422658   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:24.519220   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:24.589781   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:24.680947   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:24.917442   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:24.922442   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:25.021628   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:25.091072   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:25.418545   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:25.421872   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:25.520640   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:25.590720   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:25.845301   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.16430914s)
	W1101 08:30:25.845349   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:25.845368   10392 retry.go:31] will retry after 3.96310162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:25.921607   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:25.921649   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:26.019141   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:26.090061   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:26.418594   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:26.419127   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:26.519948   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:26.588684   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:26.919894   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:26.922829   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:27.020550   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:27.089908   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:27.417252   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:27.417814   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:27.521388   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:27.589099   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:27.919396   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:27.919539   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:28.020522   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:28.121101   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:28.418698   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:28.421501   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:28.520688   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:28.590420   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:28.917047   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:28.917574   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:29.021383   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:29.088755   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:29.422862   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:29.422901   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:29.519482   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:29.589348   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:29.808598   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:29.920469   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:29.920955   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:30.023592   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:30.089386   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:30.417062   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:30.421598   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:30.522898   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:30.588790   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:30.920260   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:30.920720   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:30.933041   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.124408751s)
	W1101 08:30:30.933084   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:30.933105   10392 retry.go:31] will retry after 5.481687476s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:31.020957   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:31.090737   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:31.418029   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:31.418038   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:31.519809   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:31.590483   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:31.919481   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:31.919666   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:32.020231   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:32.089015   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:32.423724   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:32.424114   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:32.521755   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:32.589794   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:32.916498   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:32.916609   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:33.019972   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:33.090813   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:33.419759   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:33.422047   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:33.521188   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:33.591025   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:33.926737   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:33.928067   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:34.019566   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:34.090038   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:34.418505   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:34.419727   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:34.521252   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:34.591582   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:34.916268   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:34.919883   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:35.021167   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:35.089327   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:35.416145   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:35.421286   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:35.525977   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:35.592261   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:35.998513   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:36.001248   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:36.023437   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:36.097712   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:36.415306   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:36.418799   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:36.427136   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:36.521792   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:36.589075   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:36.918552   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:36.919948   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:37.143415   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:37.145063   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:37.420055   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:37.421857   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:37.524770   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:37.590063   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:37.646763   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.231415583s)
	W1101 08:30:37.646807   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:37.646831   10392 retry.go:31] will retry after 5.025033795s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:37.916790   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:37.919516   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:38.127633   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:38.131524   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:38.418999   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:38.420016   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:38.519313   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:38.590063   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:38.916193   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:38.917179   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:39.019750   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:39.088877   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:39.417251   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:39.417889   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:39.519139   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:39.589109   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:39.917230   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:39.917415   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:40.019718   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:40.088508   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:40.420635   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:40.420774   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:40.520338   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:40.590524   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:40.917851   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:40.917978   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:41.022828   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:41.088912   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:41.417017   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:41.418828   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:41.519781   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:41.590160   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:41.917489   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:41.917618   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:42.021115   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:42.089805   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:42.419348   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:42.419429   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:42.519560   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:42.591927   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:42.673089   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:42.995204   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:42.999753   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:43.020682   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:43.090517   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:43.419475   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:43.419833   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:43.519931   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:43.589653   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:43.708353   10392 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.035222084s)
	W1101 08:30:43.708390   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:43.708406   10392 retry.go:31] will retry after 12.909151826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:43.919604   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:43.921302   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:44.020371   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:44.089479   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:44.418541   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:44.419533   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:44.519724   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:44.588645   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:44.919796   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:44.920762   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:45.020316   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:45.089149   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:45.420554   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:45.421031   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:45.523134   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:45.591903   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:45.922801   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:45.929079   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:46.024486   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:46.090368   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:46.417598   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:46.418296   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:46.522704   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:46.589755   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:46.917297   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:46.919766   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:47.020277   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:47.089472   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:47.417430   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.417584   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:30:47.519146   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:47.589548   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:47.919717   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:47.919898   10392 kapi.go:107] duration metric: took 35.506582059s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 08:30:48.019665   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:48.090391   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:48.417512   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:48.519656   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:48.588956   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:48.917653   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:49.020172   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:49.090533   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:49.418634   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:49.521351   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:49.590573   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:49.916932   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:50.023166   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:50.089650   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:50.474183   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:50.523407   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:50.590015   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:50.916190   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:51.020176   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:51.088989   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:51.416685   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:51.519708   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:51.589260   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:51.917074   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:52.021703   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:52.092682   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:52.417663   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:52.532452   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:52.591689   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:52.917774   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.023675   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:53.089486   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:53.416594   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:53.519397   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:53.589294   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:53.922137   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:54.022219   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:54.089299   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:54.416490   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:54.520797   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:54.589483   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:54.916254   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:55.019416   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:55.089279   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:55.418699   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:55.521771   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:55.590070   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:55.918018   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.025849   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:56.091126   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:56.418476   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:56.520382   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:56.590299   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:56.618451   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:30:56.918310   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:57.021946   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:57.090943   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:57.420361   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:57.520065   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:57.591085   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:30:57.617632   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:57.617657   10392 retry.go:31] will retry after 10.651159929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:30:57.916015   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.019562   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:58.090239   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:58.416910   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:58.519503   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:58.591471   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:58.916375   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:59.023969   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:59.089071   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:59.417961   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:30:59.519122   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:30:59.591321   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:30:59.917504   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:00.020248   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:00.089162   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:00.416900   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:00.520448   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:00.590850   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:00.916129   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:01.022270   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:01.090323   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:01.417636   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:01.520753   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:01.590877   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:01.917174   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:02.021562   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:02.092549   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:02.417520   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:02.520436   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:02.591323   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:02.917923   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:03.019980   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:03.090019   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:03.417173   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:03.520228   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:03.589746   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:03.917717   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:04.020877   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:04.088765   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:04.419471   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:04.524278   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:04.594734   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:04.916135   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:05.021832   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:05.127161   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:05.421518   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:05.520095   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:05.588850   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:05.917561   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:06.034402   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:06.458707   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:06.488355   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:06.572404   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:06.595902   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:06.917273   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.024838   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:07.089032   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:07.417280   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:07.520270   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:07.592695   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:07.921152   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:08.019668   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:08.090085   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:08.269341   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:31:08.420123   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:08.524518   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:08.622450   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:08.920655   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:09.020533   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:09.089928   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:31:09.155543   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:09.155586   10392 retry.go:31] will retry after 26.236601913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:09.419702   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:09.519993   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:09.590891   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:09.916975   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:10.025409   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:10.125561   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:10.416907   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:10.519647   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:10.591199   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:10.918878   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:11.020608   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:11.091111   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:11.417383   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:11.524179   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:11.622482   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:11.915845   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:12.020457   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:12.090934   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:12.417668   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:12.520343   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:12.589896   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:12.918173   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:13.020032   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:13.089621   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:13.416806   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:13.519633   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:13.592374   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:14.191421   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:14.231982   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:14.233617   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:14.417666   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:14.520902   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:14.589187   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:14.919389   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.019947   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:15.089813   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:15.420272   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:15.521158   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:15.589626   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:15.916179   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:16.020699   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:16.089914   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:16.419192   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:16.522740   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:16.589928   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:16.919772   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:17.022143   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:17.091646   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:17.417909   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:17.520226   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:17.590145   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:17.921944   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:18.018671   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:18.089728   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:18.418328   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:18.524732   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:18.590416   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:18.918480   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:19.020165   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:31:19.092887   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:19.420855   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:19.716075   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:19.716251   10392 kapi.go:107] duration metric: took 1m5.700661688s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1101 08:31:19.918980   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:20.093482   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:20.419301   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:20.589039   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:20.918988   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.089955   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:21.420019   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:21.588793   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:21.916608   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:22.091234   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:22.420172   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:22.593020   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:23.001654   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:23.091242   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:23.416920   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:23.588896   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:24.051193   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:24.089164   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:24.416680   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:24.590984   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:24.920948   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:25.089963   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:25.417063   10392 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:31:25.589667   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:25.916492   10392 kapi.go:107] duration metric: took 1m13.503852258s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 08:31:26.090136   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:26.589052   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:27.097000   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:27.591787   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:28.092228   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:28.589623   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:29.090157   10392 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:31:29.589358   10392 kapi.go:107] duration metric: took 1m13.00376217s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:31:29.591226   10392 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-468489 cluster.
	I1101 08:31:29.592684   10392 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:31:29.594034   10392 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
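	The three gcp-auth messages above describe the addon's behavior once enabled: credentials are mounted into every newly created pod unless the pod opts out via the `gcp-auth-skip-secret` label. A minimal sketch of that opt-out follows (the pod name "my-pod" is a placeholder, and the value "true" is an assumption, since the message only names the label key):

		# hypothetical pod name; label key taken from the gcp-auth message above, value assumed
		kubectl --context addons-468489 label pod my-pod gcp-auth-skip-secret=true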
	I1101 08:31:35.392803   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:31:36.104186   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:31:36.104242   10392 retry.go:31] will retry after 24.243133996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:32:00.348418   10392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:32:01.045944   10392 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:32:01.046043   10392 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
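	Every retry above fails the same way: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest carries no apiVersion or kind field, so the inspektor-gadget CRD is never created even though the rest of the gadget objects apply cleanly. A minimal diagnostic sketch, reusing the profile name and in-VM paths from the log (these commands are illustrative and were not part of the test run):

		# confirm the missing header fields in the manifest shipped to the node
		minikube -p addons-468489 ssh "sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml"
		# replay the failing apply by hand; --validate=false is the workaround kubectl's own error message suggests, and it only sidesteps the symptom
		minikube -p addons-468489 ssh "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml"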
	I1101 08:32:01.047841   10392 out.go:179] * Enabled addons: amd-gpu-device-plugin, registry-creds, cloud-spanner, ingress-dns, nvidia-device-plugin, storage-provisioner, default-storageclass, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1101 08:32:01.049348   10392 addons.go:515] duration metric: took 1m57.884408151s for enable addons: enabled=[amd-gpu-device-plugin registry-creds cloud-spanner ingress-dns nvidia-device-plugin storage-provisioner default-storageclass storage-provisioner-rancher metrics-server yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1101 08:32:01.049387   10392 start.go:247] waiting for cluster config update ...
	I1101 08:32:01.049411   10392 start.go:256] writing updated cluster config ...
	I1101 08:32:01.049638   10392 ssh_runner.go:195] Run: rm -f paused
	I1101 08:32:01.055666   10392 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:32:01.059389   10392 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-sjgmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:01.064789   10392 pod_ready.go:94] pod "coredns-66bc5c9577-sjgmx" is "Ready"
	I1101 08:32:01.064809   10392 pod_ready.go:86] duration metric: took 5.402573ms for pod "coredns-66bc5c9577-sjgmx" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:01.067104   10392 pod_ready.go:83] waiting for pod "etcd-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:01.072548   10392 pod_ready.go:94] pod "etcd-addons-468489" is "Ready"
	I1101 08:32:01.072564   10392 pod_ready.go:86] duration metric: took 5.445456ms for pod "etcd-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:01.075233   10392 pod_ready.go:83] waiting for pod "kube-apiserver-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:01.079928   10392 pod_ready.go:94] pod "kube-apiserver-addons-468489" is "Ready"
	I1101 08:32:01.079946   10392 pod_ready.go:86] duration metric: took 4.697885ms for pod "kube-apiserver-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:01.082185   10392 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:01.460536   10392 pod_ready.go:94] pod "kube-controller-manager-addons-468489" is "Ready"
	I1101 08:32:01.460568   10392 pod_ready.go:86] duration metric: took 378.366246ms for pod "kube-controller-manager-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:01.661787   10392 pod_ready.go:83] waiting for pod "kube-proxy-d6zrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:02.061035   10392 pod_ready.go:94] pod "kube-proxy-d6zrs" is "Ready"
	I1101 08:32:02.061062   10392 pod_ready.go:86] duration metric: took 399.253022ms for pod "kube-proxy-d6zrs" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:02.259853   10392 pod_ready.go:83] waiting for pod "kube-scheduler-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:02.660961   10392 pod_ready.go:94] pod "kube-scheduler-addons-468489" is "Ready"
	I1101 08:32:02.660985   10392 pod_ready.go:86] duration metric: took 401.111669ms for pod "kube-scheduler-addons-468489" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:32:02.660996   10392 pod_ready.go:40] duration metric: took 1.605305871s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:32:02.703824   10392 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 08:32:02.705894   10392 out.go:179] * Done! kubectl is now configured to use "addons-468489" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.156648210Z" level=debug msg="Received container exit code: 0, message: " file="oci/runtime_oci.go:670" id=9043388d-cdd0-4ee3-9248-f1dd91a81ac0 name=/runtime.v1.RuntimeService/ExecSync
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.156822520Z" level=debug msg="Response: &ExecSyncResponse{Stdout:[FILTERED],Stderr:[],ExitCode:0,}" file="otel-collector/interceptors.go:74" id=9043388d-cdd0-4ee3-9248-f1dd91a81ac0 name=/runtime.v1.RuntimeService/ExecSync
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.172948380Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6992720f-595d-41af-8143-a5ea77d6e484 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.173314779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6992720f-595d-41af-8143-a5ea77d6e484 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.175294557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afa7b699-c0e0-4374-9e11-756ebf4c1dc1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.177535971Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761986107177505680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589266,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afa7b699-c0e0-4374-9e11-756ebf4c1dc1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.178170517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6fb2b345-b7aa-4fca-abcb-8f876c0cb862 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.178239876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6fb2b345-b7aa-4fca-abcb-8f876c0cb862 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.178607432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e65569ad242eed2f42be192467aae915357a9793d26ed5a6e7945d301ba01a3f,PodSandboxId:b923e49603933e8a8bf8cde5cb22d75aa00ed15505044bdc2f3722730bc9692a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761985965160235434,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02be2896-2e22-4268-9b74-1264e195dc37,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfe5ebc20e7d29df338579c7b935940f08efecb6543073275401ad73613c0441,PodSandboxId:e3bf65b1e951bd50ff236359e95effbce0685e2362bcf57334b104bc448dce0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761985927233968486,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41aabf94-d190-48f2-ba3e-eab75a7075ad,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7efcdb0a07d6dd76b1c2d4864c48fc4d018b3f1fcf2047055101fb357ab5402,PodSandboxId:fbebd578d37fc65979c21d716efb62aaa0bdc5700000ae97a8fd119f04966082,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761985884810001741,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8fm8x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad6e2792-c8ab-4c5a-8932-7b144019c8b1,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5966eef3a473b1f609b7d8222b6f5cb744341011c83cf4de2c23e29dd53513f8,PodSandboxId:038f3ad417f7d8ea11852bbe5169dde89b7f55fc98d1efc2cd878a2fa5f77fa2,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863274782285,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-x52f8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52c90c76-9a17-481e-8bea-e4766c94af1d,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c723ffbb30e65a5e0a493cfdaaa9d4424e77ffc8ed9a9423c1fd00685b6eb142,PodSandboxId:aa7709f3fd4dd2b51b301f10a437f9fe28dfbf24957afc239b7f8ef9683a17ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863155849556,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jxdt4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2dbe5a05-8c54-4c06-bf27-0e68d39c6fc8,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0042304deba7bc2052b30a988a7b431d59594251701f19184b8f62d56f8ca692,PodSandboxId:02d520347d1562581e7699092ed7a1defec8d104a7505c32444e7dd40c4c8fef,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761985858850992899,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gv7nr,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c1d68823-6547-42f4-8cfa-83aa02d048e0,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df886be9fafbf70a8d3cd565b72e63e9accaafcf77ccb67574da6ae4ccadbc36,PodSandboxId:2ce0d8a6fdd1004a3d61e28e44984e721ec23a2cfe2ef24da55bbe08fbad7e0c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761985838257325571,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36080b1f-6e52-4871-bf53-646c532b90bb,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e68896ea450e3830e7816ff23f703d6390da464016ebb409a2b0cd736e24cc3,PodSandboxId:e1a7671351bd03a5496e4f474cdc3f1f7931721e30a7857
380bfb8893edc4253,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761985814070432986,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wx8s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81d7a980-35fc-40ae-a47f-4be99c0b6c65,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b250df1323cf03f8796cb8581666b4a6c3e33180cbaa4d3112d86c9d3da69d,PodSandboxId:d6bdbbd
a4b8eebea54a6d4b11612cc3421e4fd0b33ad5049766f420887daae51,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761985812425666423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ce500-deaa-4b2b-9613-8479f762e6b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410edc0c61ce504b45bbb3f18b0f22536d22712f7391c14e439c94e84304edb,PodSandboxId:3190eac37d4b6a8aa43
e4721cd12ebe3adcbe31e9e8d80fa895d9682468d2506,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761985804664891001,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sjgmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422fdc-0c8f-4909-b971-478ee3ec6443,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86baff1e8fbedaeb7a8e43e0536bd9a60d4f37c3aa20f5c70c599ad6b4d6bc3c,PodSandboxId:848ef4cde81cd7d34f8bbf866ab7a5b6b153a6bd60067090256c44b39c2c1667,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761985803690243517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476d893f-eeca-41a3-aa64-4f3340875cdf,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b52dfbf87aab4d25a48bfefa8ee19291d8e4de1116564600b891281e618d92,PodSandboxId:a0e49d4d3f46015d59fa7e02999142e82cb51c4839386803064e9b872786d6eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761985792356872335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934186350482a3c9b581189c456f4b92,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff69c00f6f21b264dd70ef4d32dceab07331462a53d1976d7851c9187893a8b8,PodSandboxId:43e34f9c901bf02badeea564568695021a6623629ca7ed1c4bf81e9643b167ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761985792351464371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99b2c9fa1b7864b1a6ebbc1ce609e0c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b86f33110d8c4f52aa6c4337dccf801e3577bfad86423097372aa5d5887c14b,PodSandboxId:27bf80994e081a92e83e82c505d6a683c79d1a7ff80e3015d11ae5e017278ac0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761985792321663910,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons
-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b76248b00540f35ccebf20c3a3df87,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee77fcda0b4b15a8180e149cb83f9d4052c608c26842ed0e17df21e75e99285a,PodSandboxId:a0ba435c55f8d005babe77ecce58d2ef5237e8f0c69d8b47e160fd982ae90c17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761985792312766109,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89418dba440d5b9db768df6f8152cfb8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6fb2b345-b7aa-4fca-abcb-8f876c0cb862 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.218229482Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=29ee6f96-c333-4edc-8a93-0d5210334203 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.218301762Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=29ee6f96-c333-4edc-8a93-0d5210334203 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.220181907Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fff96c8c-c9b4-4cb3-b2c9-ef6ab1920b70 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.221601728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761986107221571443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589266,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fff96c8c-c9b4-4cb3-b2c9-ef6ab1920b70 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.222325284Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=13ef7b32-4904-4359-a158-2cbabf452340 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.222437316Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=13ef7b32-4904-4359-a158-2cbabf452340 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.223240537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e65569ad242eed2f42be192467aae915357a9793d26ed5a6e7945d301ba01a3f,PodSandboxId:b923e49603933e8a8bf8cde5cb22d75aa00ed15505044bdc2f3722730bc9692a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761985965160235434,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02be2896-2e22-4268-9b74-1264e195dc37,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfe5ebc20e7d29df338579c7b935940f08efecb6543073275401ad73613c0441,PodSandboxId:e3bf65b1e951bd50ff236359e95effbce0685e2362bcf57334b104bc448dce0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761985927233968486,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41aabf94-d190-48f2-ba3e-eab75a7075ad,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7efcdb0a07d6dd76b1c2d4864c48fc4d018b3f1fcf2047055101fb357ab5402,PodSandboxId:fbebd578d37fc65979c21d716efb62aaa0bdc5700000ae97a8fd119f04966082,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761985884810001741,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8fm8x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad6e2792-c8ab-4c5a-8932-7b144019c8b1,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5966eef3a473b1f609b7d8222b6f5cb744341011c83cf4de2c23e29dd53513f8,PodSandboxId:038f3ad417f7d8ea11852bbe5169dde89b7f55fc98d1efc2cd878a2fa5f77fa2,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863274782285,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-x52f8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52c90c76-9a17-481e-8bea-e4766c94af1d,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c723ffbb30e65a5e0a493cfdaaa9d4424e77ffc8ed9a9423c1fd00685b6eb142,PodSandboxId:aa7709f3fd4dd2b51b301f10a437f9fe28dfbf24957afc239b7f8ef9683a17ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863155849556,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jxdt4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2dbe5a05-8c54-4c06-bf27-0e68d39c6fc8,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0042304deba7bc2052b30a988a7b431d59594251701f19184b8f62d56f8ca692,PodSandboxId:02d520347d1562581e7699092ed7a1defec8d104a7505c32444e7dd40c4c8fef,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761985858850992899,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gv7nr,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c1d68823-6547-42f4-8cfa-83aa02d048e0,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df886be9fafbf70a8d3cd565b72e63e9accaafcf77ccb67574da6ae4ccadbc36,PodSandboxId:2ce0d8a6fdd1004a3d61e28e44984e721ec23a2cfe2ef24da55bbe08fbad7e0c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761985838257325571,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36080b1f-6e52-4871-bf53-646c532b90bb,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e68896ea450e3830e7816ff23f703d6390da464016ebb409a2b0cd736e24cc3,PodSandboxId:e1a7671351bd03a5496e4f474cdc3f1f7931721e30a7857
380bfb8893edc4253,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761985814070432986,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wx8s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81d7a980-35fc-40ae-a47f-4be99c0b6c65,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b250df1323cf03f8796cb8581666b4a6c3e33180cbaa4d3112d86c9d3da69d,PodSandboxId:d6bdbbd
a4b8eebea54a6d4b11612cc3421e4fd0b33ad5049766f420887daae51,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761985812425666423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ce500-deaa-4b2b-9613-8479f762e6b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410edc0c61ce504b45bbb3f18b0f22536d22712f7391c14e439c94e84304edb,PodSandboxId:3190eac37d4b6a8aa43
e4721cd12ebe3adcbe31e9e8d80fa895d9682468d2506,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761985804664891001,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sjgmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422fdc-0c8f-4909-b971-478ee3ec6443,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86baff1e8fbedaeb7a8e43e0536bd9a60d4f37c3aa20f5c70c599ad6b4d6bc3c,PodSandboxId:848ef4cde81cd7d34f8bbf866ab7a5b6b153a6bd60067090256c44b39c2c1667,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761985803690243517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476d893f-eeca-41a3-aa64-4f3340875cdf,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b52dfbf87aab4d25a48bfefa8ee19291d8e4de1116564600b891281e618d92,PodSandboxId:a0e49d4d3f46015d59fa7e02999142e82cb51c4839386803064e9b872786d6eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761985792356872335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934186350482a3c9b581189c456f4b92,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff69c00f6f21b264dd70ef4d32dceab07331462a53d1976d7851c9187893a8b8,PodSandboxId:43e34f9c901bf02badeea564568695021a6623629ca7ed1c4bf81e9643b167ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761985792351464371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99b2c9fa1b7864b1a6ebbc1ce609e0c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b86f33110d8c4f52aa6c4337dccf801e3577bfad86423097372aa5d5887c14b,PodSandboxId:27bf80994e081a92e83e82c505d6a683c79d1a7ff80e3015d11ae5e017278ac0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761985792321663910,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons
-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b76248b00540f35ccebf20c3a3df87,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee77fcda0b4b15a8180e149cb83f9d4052c608c26842ed0e17df21e75e99285a,PodSandboxId:a0ba435c55f8d005babe77ecce58d2ef5237e8f0c69d8b47e160fd982ae90c17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761985792312766109,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89418dba440d5b9db768df6f8152cfb8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=13ef7b32-4904-4359-a158-2cbabf452340 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.264462062Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0f93b806-4836-4298-8197-c1d7d3afe6b0 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.264551985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0f93b806-4836-4298-8197-c1d7d3afe6b0 name=/runtime.v1.RuntimeService/Version
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.266098864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74313532-8559-4dc4-86e8-4f96f99fffff name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.267478232Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761986107267448623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589266,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74313532-8559-4dc4-86e8-4f96f99fffff name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.268249770Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=657d974b-51e7-470b-95b1-6d2482433979 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.268310012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=657d974b-51e7-470b-95b1-6d2482433979 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.269141565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e65569ad242eed2f42be192467aae915357a9793d26ed5a6e7945d301ba01a3f,PodSandboxId:b923e49603933e8a8bf8cde5cb22d75aa00ed15505044bdc2f3722730bc9692a,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9,State:CONTAINER_RUNNING,CreatedAt:1761985965160235434,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 02be2896-2e22-4268-9b74-1264e195dc37,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfe5ebc20e7d29df338579c7b935940f08efecb6543073275401ad73613c0441,PodSandboxId:e3bf65b1e951bd50ff236359e95effbce0685e2362bcf57334b104bc448dce0a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1761985927233968486,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 41aabf94-d190-48f2-ba3e-eab75a7075ad,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7efcdb0a07d6dd76b1c2d4864c48fc4d018b3f1fcf2047055101fb357ab5402,PodSandboxId:fbebd578d37fc65979c21d716efb62aaa0bdc5700000ae97a8fd119f04966082,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c44d76c3213ea875be38abca61688c1173da6ee1815f1ce330a2d93add531e32,State:CONTAINER_RUNNING,CreatedAt:1761985884810001741,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-675c5ddd98-8fm8x,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ad6e2792-c8ab-4c5a-8932-7b144019c8b1,},Annotations:map[string]string{io.kubernetes.
container.hash: 36aef26,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:5966eef3a473b1f609b7d8222b6f5cb744341011c83cf4de2c23e29dd53513f8,PodSandboxId:038f3ad417f7d8ea11852bbe5169dde89b7f55fc98d1efc2cd878a2fa5f77fa2,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:08cfe302fe
afeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863274782285,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-x52f8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 52c90c76-9a17-481e-8bea-e4766c94af1d,},Annotations:map[string]string{io.kubernetes.container.hash: 166f2edf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c723ffbb30e65a5e0a493cfdaaa9d4424e77ffc8ed9a9423c1fd00685b6eb142,PodSandboxId:aa7709f3fd4dd2b51b301f10a437f9fe28dfbf24957afc239b7f8ef9683a17ac,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2,State:CONTAINER_EXITED,CreatedAt:1761985863155849556,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-jxdt4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 2dbe5a05-8c54-4c06-bf27-0e68d39c6fc8,},Annotations:map[string]string{io.kubernetes.container.hash: 3193dfde,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0042304deba7bc2052b30a988a7b431d59594251701f19184b8f62d56f8ca692,PodSandboxId:02d520347d1562581e7699092ed7a1defec8d104a7505c32444e7dd40c4c8fef,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38dca7434d5f28a7ced293ea76279adbabf08af32ee48a29bab2668b8ea7401f,State:CONTAINER_RUNNING,CreatedAt:1761985858850992899,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-gv7nr,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: c1d68823-6547-42f4-8cfa-83aa02d048e0,},Annotations:map[string]string{io.kubernetes.container.hash: f68894e6,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df886be9fafbf70a8d3cd565b72e63e9accaafcf77ccb67574da6ae4ccadbc36,PodSandboxId:2ce0d8a6fdd1004a3d61e28e44984e721ec23a2cfe2ef24da55bbe08fbad7e0c,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/minikube-i
ngress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b6ab53fbfedaa9592ce8777a49eec3483e53861fd2d33711cd18e514eefc3556,State:CONTAINER_RUNNING,CreatedAt:1761985838257325571,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36080b1f-6e52-4871-bf53-646c532b90bb,},Annotations:map[string]string{io.kubernetes.container.hash: 1c2df62c,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e68896ea450e3830e7816ff23f703d6390da464016ebb409a2b0cd736e24cc3,PodSandboxId:e1a7671351bd03a5496e4f474cdc3f1f7931721e30a7857
380bfb8893edc4253,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1761985814070432986,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-wx8s2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 81d7a980-35fc-40ae-a47f-4be99c0b6c65,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85b250df1323cf03f8796cb8581666b4a6c3e33180cbaa4d3112d86c9d3da69d,PodSandboxId:d6bdbbd
a4b8eebea54a6d4b11612cc3421e4fd0b33ad5049766f420887daae51,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761985812425666423,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b0ce500-deaa-4b2b-9613-8479f762e6b8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0410edc0c61ce504b45bbb3f18b0f22536d22712f7391c14e439c94e84304edb,PodSandboxId:3190eac37d4b6a8aa43
e4721cd12ebe3adcbe31e9e8d80fa895d9682468d2506,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761985804664891001,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-sjgmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422fdc-0c8f-4909-b971-478ee3ec6443,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"pr
otocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86baff1e8fbedaeb7a8e43e0536bd9a60d4f37c3aa20f5c70c599ad6b4d6bc3c,PodSandboxId:848ef4cde81cd7d34f8bbf866ab7a5b6b153a6bd60067090256c44b39c2c1667,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761985803690243517,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d6zrs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 476d893f-eeca-41a3-aa64-4f3340875cdf,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.
container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29b52dfbf87aab4d25a48bfefa8ee19291d8e4de1116564600b891281e618d92,PodSandboxId:a0e49d4d3f46015d59fa7e02999142e82cb51c4839386803064e9b872786d6eb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761985792356872335,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 934186350482a3c9b581189c456f4b92,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"ho
stPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff69c00f6f21b264dd70ef4d32dceab07331462a53d1976d7851c9187893a8b8,PodSandboxId:43e34f9c901bf02badeea564568695021a6623629ca7ed1c4bf81e9643b167ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761985792351464371,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f99b2c9fa1b7864b1a6ebbc1ce609e0c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b86f33110d8c4f52aa6c4337dccf801e3577bfad86423097372aa5d5887c14b,PodSandboxId:27bf80994e081a92e83e82c505d6a683c79d1a7ff80e3015d11ae5e017278ac0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761985792321663910,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons
-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67b76248b00540f35ccebf20c3a3df87,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee77fcda0b4b15a8180e149cb83f9d4052c608c26842ed0e17df21e75e99285a,PodSandboxId:a0ba435c55f8d005babe77ecce58d2ef5237e8f0c69d8b47e160fd982ae90c17,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761985792312766109,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-468489,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89418dba440d5b9db768df6f8152cfb8,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=657d974b-51e7-470b-95b1-6d2482433979 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.284610907Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.list.v2+json\"" file="docker/docker_client.go:964"
	Nov 01 08:35:07 addons-468489 crio[815]: time="2025-11-01 08:35:07.284905226Z" level=debug msg="GET https://registry-1.docker.io/v2/kicbase/echo-server/manifests/sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" file="docker/docker_client.go:631"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e65569ad242ee       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                              2 minutes ago       Running             nginx                     0                   b923e49603933       nginx
	bfe5ebc20e7d2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   e3bf65b1e951b       busybox
	b7efcdb0a07d6       registry.k8s.io/ingress-nginx/controller@sha256:1b044f6dcac3afbb59e05d98463f1dec6f3d3fb99940bc12ca5d80270358e3bd             3 minutes ago       Running             controller                0                   fbebd578d37fc       ingress-nginx-controller-675c5ddd98-8fm8x
	5966eef3a473b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              patch                     0                   038f3ad417f7d       ingress-nginx-admission-patch-x52f8
	c723ffbb30e65       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:3d671cf20a35cd94efc5dcd484970779eb21e7938c98fbc3673693b8a117cf39   4 minutes ago       Exited              create                    0                   aa7709f3fd4dd       ingress-nginx-admission-create-jxdt4
	0042304deba7b       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb            4 minutes ago       Running             gadget                    0                   02d520347d156       gadget-gv7nr
	df886be9fafbf       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               4 minutes ago       Running             minikube-ingress-dns      0                   2ce0d8a6fdd10       kube-ingress-dns-minikube
	8e68896ea450e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   e1a7671351bd0       amd-gpu-device-plugin-wx8s2
	85b250df1323c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   d6bdbbda4b8ee       storage-provisioner
	0410edc0c61ce       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             5 minutes ago       Running             coredns                   0                   3190eac37d4b6       coredns-66bc5c9577-sjgmx
	86baff1e8fbed       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                             5 minutes ago       Running             kube-proxy                0                   848ef4cde81cd       kube-proxy-d6zrs
	29b52dfbf87aa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             5 minutes ago       Running             etcd                      0                   a0e49d4d3f460       etcd-addons-468489
	ff69c00f6f21b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                             5 minutes ago       Running             kube-scheduler            0                   43e34f9c901bf       kube-scheduler-addons-468489
	0b86f33110d8c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                             5 minutes ago       Running             kube-controller-manager   0                   27bf80994e081       kube-controller-manager-addons-468489
	ee77fcda0b4b1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                             5 minutes ago       Running             kube-apiserver            0                   a0ba435c55f8d       kube-apiserver-addons-468489
	
	
	==> coredns [0410edc0c61ce504b45bbb3f18b0f22536d22712f7391c14e439c94e84304edb] <==
	[INFO] 10.244.0.8:53217 - 52968 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.001296341s
	[INFO] 10.244.0.8:53217 - 30963 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000150071s
	[INFO] 10.244.0.8:53217 - 62719 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000212243s
	[INFO] 10.244.0.8:53217 - 32338 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000414556s
	[INFO] 10.244.0.8:53217 - 24575 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000173934s
	[INFO] 10.244.0.8:53217 - 43313 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00010189s
	[INFO] 10.244.0.8:53217 - 31640 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000208794s
	[INFO] 10.244.0.8:51859 - 30571 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00021214s
	[INFO] 10.244.0.8:51859 - 30236 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000274982s
	[INFO] 10.244.0.8:54840 - 56669 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000123094s
	[INFO] 10.244.0.8:54840 - 56427 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085391s
	[INFO] 10.244.0.8:35736 - 19956 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000104725s
	[INFO] 10.244.0.8:35736 - 19463 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000147811s
	[INFO] 10.244.0.8:38204 - 37170 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000118981s
	[INFO] 10.244.0.8:38204 - 36992 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000160432s
	[INFO] 10.244.0.23:50070 - 19806 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00087431s
	[INFO] 10.244.0.23:58901 - 33904 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216667s
	[INFO] 10.244.0.23:57004 - 57995 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000111606s
	[INFO] 10.244.0.23:48375 - 37878 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120679s
	[INFO] 10.244.0.23:49540 - 50340 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139635s
	[INFO] 10.244.0.23:39694 - 2307 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088413s
	[INFO] 10.244.0.23:52831 - 64898 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001332897s
	[INFO] 10.244.0.23:37033 - 25300 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001203329s
	[INFO] 10.244.0.26:43125 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000392449s
	[INFO] 10.244.0.26:40324 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000157102s
	
	
	==> describe nodes <==
	Name:               addons-468489
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-468489
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=addons-468489
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_29_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-468489
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:29:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-468489
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:35:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:33:01 +0000   Sat, 01 Nov 2025 08:29:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:33:01 +0000   Sat, 01 Nov 2025 08:29:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:33:01 +0000   Sat, 01 Nov 2025 08:29:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:33:01 +0000   Sat, 01 Nov 2025 08:29:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    addons-468489
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 839602306f48496481c1c1246eb542bd
	  System UUID:                83960230-6f48-4964-81c1-c1246eb542bd
	  Boot ID:                    80856773-1675-4201-abf1-d791538d2349
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  default                     hello-world-app-5d498dc89-5x257              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-gv7nr                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-8fm8x    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m56s
	  kube-system                 amd-gpu-device-plugin-wx8s2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 coredns-66bc5c9577-sjgmx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m4s
	  kube-system                 etcd-addons-468489                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m11s
	  kube-system                 kube-apiserver-addons-468489                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-addons-468489        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-d6zrs                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-addons-468489                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node addons-468489 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m16s)  kubelet          Node addons-468489 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node addons-468489 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m10s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m10s                  kubelet          Node addons-468489 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s                  kubelet          Node addons-468489 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s                  kubelet          Node addons-468489 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m9s                   kubelet          Node addons-468489 status is now: NodeReady
	  Normal  RegisteredNode           5m5s                   node-controller  Node addons-468489 event: Registered Node addons-468489 in Controller
	
	
	==> dmesg <==
	[  +4.167563] kauditd_printk_skb: 371 callbacks suppressed
	[  +6.347083] kauditd_printk_skb: 5 callbacks suppressed
	[ +10.002164] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.133693] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.261342] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.193488] kauditd_printk_skb: 11 callbacks suppressed
	[Nov 1 08:31] kauditd_printk_skb: 131 callbacks suppressed
	[  +4.902464] kauditd_printk_skb: 111 callbacks suppressed
	[  +3.423825] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.195096] kauditd_printk_skb: 74 callbacks suppressed
	[  +4.560016] kauditd_printk_skb: 32 callbacks suppressed
	[  +8.087978] kauditd_printk_skb: 17 callbacks suppressed
	[Nov 1 08:32] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.017590] kauditd_printk_skb: 41 callbacks suppressed
	[  +6.067331] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.375927] kauditd_printk_skb: 38 callbacks suppressed
	[  +2.179313] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.000545] kauditd_printk_skb: 179 callbacks suppressed
	[  +3.908041] kauditd_printk_skb: 113 callbacks suppressed
	[  +2.395653] kauditd_printk_skb: 112 callbacks suppressed
	[Nov 1 08:33] kauditd_printk_skb: 57 callbacks suppressed
	[  +0.000024] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.084125] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.514037] kauditd_printk_skb: 130 callbacks suppressed
	[Nov 1 08:35] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [29b52dfbf87aab4d25a48bfefa8ee19291d8e4de1116564600b891281e618d92] <==
	{"level":"warn","ts":"2025-11-01T08:31:19.699735Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.565137ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:31:19.699771Z","caller":"traceutil/trace.go:172","msg":"trace[503839985] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1166; }","duration":"115.593687ms","start":"2025-11-01T08:31:19.584155Z","end":"2025-11-01T08:31:19.699749Z","steps":["trace[503839985] 'agreement among raft nodes before linearized reading'  (duration: 115.539752ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:31:22.994799Z","caller":"traceutil/trace.go:172","msg":"trace[345948826] transaction","detail":"{read_only:false; response_revision:1172; number_of_response:1; }","duration":"240.685188ms","start":"2025-11-01T08:31:22.754101Z","end":"2025-11-01T08:31:22.994786Z","steps":["trace[345948826] 'process raft request'  (duration: 240.567422ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:31:24.043194Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.583924ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:31:24.043245Z","caller":"traceutil/trace.go:172","msg":"trace[648144560] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1173; }","duration":"132.654828ms","start":"2025-11-01T08:31:23.910580Z","end":"2025-11-01T08:31:24.043235Z","steps":["trace[648144560] 'range keys from in-memory index tree'  (duration: 132.488062ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:31:24.043933Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"108.829538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/flowschemas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:31:24.044030Z","caller":"traceutil/trace.go:172","msg":"trace[1821756979] range","detail":"{range_begin:/registry/flowschemas; range_end:; response_count:0; response_revision:1173; }","duration":"108.935277ms","start":"2025-11-01T08:31:23.935086Z","end":"2025-11-01T08:31:24.044022Z","steps":["trace[1821756979] 'range keys from in-memory index tree'  (duration: 108.373344ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:31:59.494398Z","caller":"traceutil/trace.go:172","msg":"trace[1297873927] linearizableReadLoop","detail":"{readStateIndex:1312; appliedIndex:1312; }","duration":"254.981682ms","start":"2025-11-01T08:31:59.239394Z","end":"2025-11-01T08:31:59.494376Z","steps":["trace[1297873927] 'read index received'  (duration: 254.942901ms)","trace[1297873927] 'applied index is now lower than readState.Index'  (duration: 37.811µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:31:59.494561Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"255.182709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:31:59.494593Z","caller":"traceutil/trace.go:172","msg":"trace[1033651856] range","detail":"{range_begin:/registry/prioritylevelconfigurations; range_end:; response_count:0; response_revision:1268; }","duration":"255.249976ms","start":"2025-11-01T08:31:59.239335Z","end":"2025-11-01T08:31:59.494585Z","steps":["trace[1033651856] 'agreement among raft nodes before linearized reading'  (duration: 255.154647ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:31:59.494582Z","caller":"traceutil/trace.go:172","msg":"trace[1134202419] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"285.46888ms","start":"2025-11-01T08:31:59.209101Z","end":"2025-11-01T08:31:59.494570Z","steps":["trace[1134202419] 'process raft request'  (duration: 285.32125ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:32:29.891161Z","caller":"traceutil/trace.go:172","msg":"trace[427803549] transaction","detail":"{read_only:false; response_revision:1435; number_of_response:1; }","duration":"169.610725ms","start":"2025-11-01T08:32:29.721506Z","end":"2025-11-01T08:32:29.891117Z","steps":["trace[427803549] 'process raft request'  (duration: 169.479164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:32:30.138883Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"166.897792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:32:30.138948Z","caller":"traceutil/trace.go:172","msg":"trace[206656571] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1435; }","duration":"166.966997ms","start":"2025-11-01T08:32:29.971970Z","end":"2025-11-01T08:32:30.138937Z","steps":["trace[206656571] 'range keys from in-memory index tree'  (duration: 166.814919ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:32:30.139143Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.7919ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" limit:1 ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2025-11-01T08:32:30.139165Z","caller":"traceutil/trace.go:172","msg":"trace[668058850] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1435; }","duration":"103.818543ms","start":"2025-11-01T08:32:30.035340Z","end":"2025-11-01T08:32:30.139159Z","steps":["trace[668058850] 'range keys from in-memory index tree'  (duration: 103.578379ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:33:14.307184Z","caller":"traceutil/trace.go:172","msg":"trace[1945716621] linearizableReadLoop","detail":"{readStateIndex:1816; appliedIndex:1816; }","duration":"291.360612ms","start":"2025-11-01T08:33:14.015763Z","end":"2025-11-01T08:33:14.307124Z","steps":["trace[1945716621] 'read index received'  (duration: 291.349008ms)","trace[1945716621] 'applied index is now lower than readState.Index'  (duration: 6.911µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T08:33:14.307513Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"291.713681ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc-restore\" limit:1 ","response":"range_response_count:1 size:982"}
	{"level":"info","ts":"2025-11-01T08:33:14.307540Z","caller":"traceutil/trace.go:172","msg":"trace[480338789] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc-restore; range_end:; response_count:1; response_revision:1744; }","duration":"291.773714ms","start":"2025-11-01T08:33:14.015759Z","end":"2025-11-01T08:33:14.307533Z","steps":["trace[480338789] 'agreement among raft nodes before linearized reading'  (duration: 291.568212ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:33:14.308220Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.438339ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo\" limit:1 ","response":"range_response_count:1 size:1698"}
	{"level":"info","ts":"2025-11-01T08:33:14.308272Z","caller":"traceutil/trace.go:172","msg":"trace[100684495] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo; range_end:; response_count:1; response_revision:1745; }","duration":"152.499891ms","start":"2025-11-01T08:33:14.155763Z","end":"2025-11-01T08:33:14.308263Z","steps":["trace[100684495] 'agreement among raft nodes before linearized reading'  (duration: 152.384522ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T08:33:14.308729Z","caller":"traceutil/trace.go:172","msg":"trace[1502752599] transaction","detail":"{read_only:false; response_revision:1745; number_of_response:1; }","duration":"350.142852ms","start":"2025-11-01T08:33:13.958572Z","end":"2025-11-01T08:33:14.308715Z","steps":["trace[1502752599] 'process raft request'  (duration: 348.630084ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:33:14.309342Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-01T08:33:13.958552Z","time spent":"350.286255ms","remote":"127.0.0.1:50332","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1737 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2025-11-01T08:33:14.310683Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.169282ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2025-11-01T08:33:14.310840Z","caller":"traceutil/trace.go:172","msg":"trace[927265233] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1745; }","duration":"140.046605ms","start":"2025-11-01T08:33:14.170740Z","end":"2025-11-01T08:33:14.310786Z","steps":["trace[927265233] 'agreement among raft nodes before linearized reading'  (duration: 138.730788ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:35:07 up 5 min,  0 users,  load average: 0.54, 1.19, 0.64
	Linux addons-468489 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ee77fcda0b4b15a8180e149cb83f9d4052c608c26842ed0e17df21e75e99285a] <==
	E1101 08:30:44.904270       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.231.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.231.114:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.231.114:443: connect: connection refused" logger="UnhandledError"
	E1101 08:30:44.908432       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.231.114:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.231.114:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.231.114:443: connect: connection refused" logger="UnhandledError"
	I1101 08:30:44.983978       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1101 08:32:13.481602       1 conn.go:339] Error on socket receive: read tcp 192.168.39.108:8443->192.168.39.1:49294: use of closed network connection
	E1101 08:32:13.675594       1 conn.go:339] Error on socket receive: read tcp 192.168.39.108:8443->192.168.39.1:49334: use of closed network connection
	I1101 08:32:22.879757       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.181.126"}
	I1101 08:32:40.508929       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 08:32:40.702266       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.132.59"}
	I1101 08:32:45.923404       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1101 08:33:02.265787       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1101 08:33:06.689523       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1101 08:33:24.612706       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 08:33:24.612785       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 08:33:24.663052       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 08:33:24.663116       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 08:33:24.668030       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 08:33:24.668087       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 08:33:24.732712       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 08:33:24.732819       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1101 08:33:24.817186       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1101 08:33:24.817229       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1101 08:33:25.668384       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1101 08:33:25.818268       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1101 08:33:25.846076       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1101 08:35:06.036671       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.54.88"}
	
	
	==> kube-controller-manager [0b86f33110d8c4f52aa6c4337dccf801e3577bfad86423097372aa5d5887c14b] <==
	E1101 08:33:32.391509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I1101 08:33:33.355260       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 08:33:33.355414       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 08:33:33.505239       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 08:33:33.507051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 08:33:35.293136       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 08:33:35.294135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 08:33:40.024875       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 08:33:40.025933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 08:33:45.601954       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 08:33:45.603253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 08:33:46.784125       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 08:33:46.785602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 08:33:57.853519       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 08:33:57.854523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 08:34:05.950610       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 08:34:05.952184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 08:34:06.199633       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 08:34:06.200594       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 08:34:32.625917       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 08:34:32.626988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 08:34:33.915013       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 08:34:33.916495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E1101 08:34:49.513756       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E1101 08:34:49.514881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [86baff1e8fbedaeb7a8e43e0536bd9a60d4f37c3aa20f5c70c599ad6b4d6bc3c] <==
	I1101 08:30:04.379607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:30:04.484250       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:30:04.489669       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.108"]
	E1101 08:30:04.491046       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:30:04.711753       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 08:30:04.712094       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 08:30:04.712128       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:30:04.749031       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:30:04.750676       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:30:04.750780       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:30:04.789116       1 config.go:200] "Starting service config controller"
	I1101 08:30:04.887480       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:30:04.798829       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:30:04.887506       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:30:04.809107       1 config.go:309] "Starting node config controller"
	I1101 08:30:04.887513       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:30:04.887518       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:30:04.798814       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:30:04.914825       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:30:04.914837       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:30:04.914927       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:30:04.950616       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ff69c00f6f21b264dd70ef4d32dceab07331462a53d1976d7851c9187893a8b8] <==
	E1101 08:29:55.155934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:29:55.156453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:29:55.156546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:29:55.156639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:29:55.156743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:29:55.156934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:29:55.157981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:29:55.993062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 08:29:55.993972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:29:56.064386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:29:56.089171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:29:56.120020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:29:56.150183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 08:29:56.157399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 08:29:56.196761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 08:29:56.204599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:29:56.250551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:29:56.276897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:29:56.286300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:29:56.307935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:29:56.313939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:29:56.329734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 08:29:56.372416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:29:56.443907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 08:29:59.137411       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 08:33:27 addons-468489 kubelet[1503]: E1101 08:33:27.864747    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986007864261615  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:33:27 addons-468489 kubelet[1503]: E1101 08:33:27.864768    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986007864261615  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:33:37 addons-468489 kubelet[1503]: E1101 08:33:37.868468    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986017867919102  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:33:37 addons-468489 kubelet[1503]: E1101 08:33:37.868491    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986017867919102  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:33:47 addons-468489 kubelet[1503]: E1101 08:33:47.871611    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986027870696856  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:33:47 addons-468489 kubelet[1503]: E1101 08:33:47.871721    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986027870696856  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:33:50 addons-468489 kubelet[1503]: I1101 08:33:50.702165    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wx8s2" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:33:57 addons-468489 kubelet[1503]: E1101 08:33:57.875474    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986037875012928  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:33:57 addons-468489 kubelet[1503]: E1101 08:33:57.875503    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986037875012928  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:00 addons-468489 kubelet[1503]: I1101 08:34:00.825888    1503 scope.go:117] "RemoveContainer" containerID="c661ec10bb22123253e40ccaedcab1d71525f402c2aaa51013388b56677a457f"
	Nov 01 08:34:07 addons-468489 kubelet[1503]: E1101 08:34:07.877889    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986047877611034  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:07 addons-468489 kubelet[1503]: E1101 08:34:07.877927    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986047877611034  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:17 addons-468489 kubelet[1503]: E1101 08:34:17.880433    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986057880012589  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:17 addons-468489 kubelet[1503]: E1101 08:34:17.880716    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986057880012589  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:25 addons-468489 kubelet[1503]: I1101 08:34:25.707458    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:34:27 addons-468489 kubelet[1503]: E1101 08:34:27.886750    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986067883729634  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:27 addons-468489 kubelet[1503]: E1101 08:34:27.887040    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986067883729634  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:37 addons-468489 kubelet[1503]: E1101 08:34:37.889714    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986077889303130  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:37 addons-468489 kubelet[1503]: E1101 08:34:37.889741    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986077889303130  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:47 addons-468489 kubelet[1503]: E1101 08:34:47.894155    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986087893064022  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:47 addons-468489 kubelet[1503]: E1101 08:34:47.894231    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986087893064022  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:57 addons-468489 kubelet[1503]: E1101 08:34:57.898114    1503 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761986097896509378  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:57 addons-468489 kubelet[1503]: E1101 08:34:57.898168    1503 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761986097896509378  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:589266}  inodes_used:{value:201}}"
	Nov 01 08:34:58 addons-468489 kubelet[1503]: I1101 08:34:58.702933    1503 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-wx8s2" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:35:06 addons-468489 kubelet[1503]: I1101 08:35:06.107834    1503 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sd28\" (UniqueName: \"kubernetes.io/projected/093032c3-57ab-46d6-9c77-d68ca1ac57fb-kube-api-access-7sd28\") pod \"hello-world-app-5d498dc89-5x257\" (UID: \"093032c3-57ab-46d6-9c77-d68ca1ac57fb\") " pod="default/hello-world-app-5d498dc89-5x257"
	
	
	==> storage-provisioner [85b250df1323cf03f8796cb8581666b4a6c3e33180cbaa4d3112d86c9d3da69d] <==
	W1101 08:34:42.820670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:44.824071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:44.828706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:46.832184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:46.840998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:48.844054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:48.849584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:50.852819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:50.862188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:52.866817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:52.871621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:54.875011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:54.883325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:56.887527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:56.892996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:58.896146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:34:58.902028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:00.905131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:00.910451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:02.914223       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:02.919418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:04.923605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:04.930627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:06.936297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:35:06.945235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
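The storage-provisioner block above logs the v1 Endpoints deprecation warning every couple of seconds, and the warning itself names discovery.k8s.io/v1 EndpointSlice as the replacement. A minimal triage sketch against the same addons-468489 context (these commands are an assumption for manual follow-up, not part of the test harness), comparing the legacy Endpoints objects with their EndpointSlice counterparts:

	# list the deprecated v1 Endpoints objects the provisioner is still touching
	kubectl --context addons-468489 get endpoints -A
	# list the discovery.k8s.io/v1 EndpointSlice objects the warning recommends
	kubectl --context addons-468489 get endpointslices.discovery.k8s.io -A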
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-468489 -n addons-468489
helpers_test.go:269: (dbg) Run:  kubectl --context addons-468489 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-5x257 ingress-nginx-admission-create-jxdt4 ingress-nginx-admission-patch-x52f8
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-468489 describe pod hello-world-app-5d498dc89-5x257 ingress-nginx-admission-create-jxdt4 ingress-nginx-admission-patch-x52f8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-468489 describe pod hello-world-app-5d498dc89-5x257 ingress-nginx-admission-create-jxdt4 ingress-nginx-admission-patch-x52f8: exit status 1 (76.563955ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-5d498dc89-5x257
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-468489/192.168.39.108
	Start Time:       Sat, 01 Nov 2025 08:35:05 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7sd28 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7sd28:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-5x257 to addons-468489
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jxdt4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-x52f8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-468489 describe pod hello-world-app-5d498dc89-5x257 ingress-nginx-admission-create-jxdt4 ingress-nginx-admission-patch-x52f8: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-468489 addons disable ingress-dns --alsologtostderr -v=1: (1.097774064s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-468489 addons disable ingress --alsologtostderr -v=1: (7.73839298s)
--- FAIL: TestAddons/parallel/Ingress (157.05s)
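For reference, the describe output above caught hello-world-app-5d498dc89-5x257 while it was still Pending with the echo-server image pull in progress, so the post-mortem pod listing is expected to show it as non-running. A minimal sketch for re-checking that deployment by hand, assuming the addons-468489 profile is still up (hypothetical follow-up commands, not part of the harness):

	# block until the hello-world-app pod reports Ready, or time out after two minutes
	kubectl --context addons-468489 wait --for=condition=Ready pod -l app=hello-world-app --timeout=120s
	# confirm the pod landed on addons-468489 and received an IP
	kubectl --context addons-468489 get pod -l app=hello-world-app -o wide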

                                                
                                    
x
+
TestPreload (151.48s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-168376 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0
E1101 09:21:41.645886    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:22:03.360590    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-168376 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0: (1m30.338757463s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-168376 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-168376 image pull gcr.io/k8s-minikube/busybox: (3.54831447s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-168376
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-168376: (6.829564361s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-168376 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-168376 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (47.968045189s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-168376 image list
preload_test.go:75: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.10
	registry.k8s.io/kube-scheduler:v1.32.0
	registry.k8s.io/kube-proxy:v1.32.0
	registry.k8s.io/kube-controller-manager:v1.32.0
	registry.k8s.io/kube-apiserver:v1.32.0
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20241108-5c6d2daf

                                                
                                                
-- /stdout --
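The image list above comes from the second start of test-preload-168376, and gcr.io/k8s-minikube/busybox, pulled just before the stop, is no longer present; that is exactly what the assertion rejects. A minimal sketch that reproduces the failing check by hand, assuming the test-preload-168376 profile still exists (commands are an assumption, not part of the harness):

	# the assertion boils down to: is busybox still in the cached image list after the restart?
	out/minikube-linux-amd64 -p test-preload-168376 image list | grep busybox \
	  || echo "gcr.io/k8s-minikube/busybox missing after restart"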
panic.go:636: *** TestPreload FAILED at 2025-11-01 09:23:41.241520041 +0000 UTC m=+3273.659217142
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPreload]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-168376 -n test-preload-168376
helpers_test.go:252: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-168376 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p test-preload-168376 logs -n 25: (1.073959972s)
helpers_test.go:260: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ multinode-033193 ssh -n multinode-033193-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:09 UTC │ 01 Nov 25 09:09 UTC │
	│ ssh     │ multinode-033193 ssh -n multinode-033193 sudo cat /home/docker/cp-test_multinode-033193-m03_multinode-033193.txt                                          │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:09 UTC │ 01 Nov 25 09:09 UTC │
	│ cp      │ multinode-033193 cp multinode-033193-m03:/home/docker/cp-test.txt multinode-033193-m02:/home/docker/cp-test_multinode-033193-m03_multinode-033193-m02.txt │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:09 UTC │ 01 Nov 25 09:09 UTC │
	│ ssh     │ multinode-033193 ssh -n multinode-033193-m03 sudo cat /home/docker/cp-test.txt                                                                            │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:09 UTC │ 01 Nov 25 09:09 UTC │
	│ ssh     │ multinode-033193 ssh -n multinode-033193-m02 sudo cat /home/docker/cp-test_multinode-033193-m03_multinode-033193-m02.txt                                  │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:09 UTC │ 01 Nov 25 09:09 UTC │
	│ node    │ multinode-033193 node stop m03                                                                                                                            │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:09 UTC │ 01 Nov 25 09:09 UTC │
	│ node    │ multinode-033193 node start m03 -v=5 --alsologtostderr                                                                                                    │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:09 UTC │ 01 Nov 25 09:10 UTC │
	│ node    │ list -p multinode-033193                                                                                                                                  │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:10 UTC │                     │
	│ stop    │ -p multinode-033193                                                                                                                                       │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:10 UTC │ 01 Nov 25 09:13 UTC │
	│ start   │ -p multinode-033193 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:13 UTC │ 01 Nov 25 09:15 UTC │
	│ node    │ list -p multinode-033193                                                                                                                                  │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │                     │
	│ node    │ multinode-033193 node delete m03                                                                                                                          │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:15 UTC │
	│ stop    │ multinode-033193 stop                                                                                                                                     │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:15 UTC │ 01 Nov 25 09:18 UTC │
	│ start   │ -p multinode-033193 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio                                                            │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:18 UTC │ 01 Nov 25 09:20 UTC │
	│ node    │ list -p multinode-033193                                                                                                                                  │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ start   │ -p multinode-033193-m02 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-033193-m02 │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │                     │
	│ start   │ -p multinode-033193-m03 --driver=kvm2  --container-runtime=crio                                                                                           │ multinode-033193-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 09:20 UTC │ 01 Nov 25 09:21 UTC │
	│ node    │ add -p multinode-033193                                                                                                                                   │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │                     │
	│ delete  │ -p multinode-033193-m03                                                                                                                                   │ multinode-033193-m03 │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ delete  │ -p multinode-033193                                                                                                                                       │ multinode-033193     │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:21 UTC │
	│ start   │ -p test-preload-168376 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.32.0   │ test-preload-168376  │ jenkins │ v1.37.0 │ 01 Nov 25 09:21 UTC │ 01 Nov 25 09:22 UTC │
	│ image   │ test-preload-168376 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-168376  │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:22 UTC │
	│ stop    │ -p test-preload-168376                                                                                                                                    │ test-preload-168376  │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:22 UTC │
	│ start   │ -p test-preload-168376 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio                                           │ test-preload-168376  │ jenkins │ v1.37.0 │ 01 Nov 25 09:22 UTC │ 01 Nov 25 09:23 UTC │
	│ image   │ test-preload-168376 image list                                                                                                                            │ test-preload-168376  │ jenkins │ v1.37.0 │ 01 Nov 25 09:23 UTC │ 01 Nov 25 09:23 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:22:53
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:22:53.130669   33346 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:22:53.130963   33346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:22:53.130974   33346 out.go:374] Setting ErrFile to fd 2...
	I1101 09:22:53.130981   33346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:22:53.131202   33346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 09:22:53.131655   33346 out.go:368] Setting JSON to false
	I1101 09:22:53.132515   33346 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":3920,"bootTime":1761985053,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:22:53.132600   33346 start.go:143] virtualization: kvm guest
	I1101 09:22:53.134764   33346 out.go:179] * [test-preload-168376] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:22:53.136376   33346 notify.go:221] Checking for updates...
	I1101 09:22:53.136417   33346 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:22:53.137743   33346 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:22:53.139070   33346 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:22:53.140294   33346 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:22:53.141552   33346 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:22:53.142762   33346 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:22:53.144471   33346 config.go:182] Loaded profile config "test-preload-168376": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 09:22:53.146336   33346 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 09:22:53.147676   33346 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:22:53.181918   33346 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:22:53.183357   33346 start.go:309] selected driver: kvm2
	I1101 09:22:53.183371   33346 start.go:930] validating driver "kvm2" against &{Name:test-preload-168376 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-168376 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:22:53.183485   33346 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:22:53.184668   33346 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:22:53.184700   33346 cni.go:84] Creating CNI manager for ""
	I1101 09:22:53.184760   33346 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:22:53.184821   33346 start.go:353] cluster config:
	{Name:test-preload-168376 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:test-preload-168376 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:22:53.184940   33346 iso.go:125] acquiring lock: {Name:mk345092679db7c379cbaa00125c4f18e2b4a125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:22:53.187249   33346 out.go:179] * Starting "test-preload-168376" primary control-plane node in "test-preload-168376" cluster
	I1101 09:22:53.188541   33346 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 09:22:53.204816   33346 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:22:53.204853   33346 cache.go:59] Caching tarball of preloaded images
	I1101 09:22:53.204984   33346 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 09:22:53.206906   33346 out.go:179] * Downloading Kubernetes v1.32.0 preload ...
	I1101 09:22:53.208257   33346 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 09:22:53.237994   33346 preload.go:290] Got checksum from GCS API "2acdb4dde52794f2167c79dcee7507ae"
	I1101 09:22:53.238036   33346 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.0/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:2acdb4dde52794f2167c79dcee7507ae -> /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:22:56.711649   33346 cache.go:62] Finished verifying existence of preloaded tar for v1.32.0 on crio
	I1101 09:22:56.711786   33346 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/config.json ...
	I1101 09:22:56.712012   33346 start.go:360] acquireMachinesLock for test-preload-168376: {Name:mk8049b4e421873947dfa0bcd96201ccb1e1825c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 09:22:56.712073   33346 start.go:364] duration metric: took 39.495µs to acquireMachinesLock for "test-preload-168376"
	I1101 09:22:56.712086   33346 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:22:56.712092   33346 fix.go:54] fixHost starting: 
	I1101 09:22:56.713886   33346 fix.go:112] recreateIfNeeded on test-preload-168376: state=Stopped err=<nil>
	W1101 09:22:56.713908   33346 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:22:56.715591   33346 out.go:252] * Restarting existing kvm2 VM for "test-preload-168376" ...
	I1101 09:22:56.715618   33346 main.go:143] libmachine: starting domain...
	I1101 09:22:56.715636   33346 main.go:143] libmachine: ensuring networks are active...
	I1101 09:22:56.716371   33346 main.go:143] libmachine: Ensuring network default is active
	I1101 09:22:56.716716   33346 main.go:143] libmachine: Ensuring network mk-test-preload-168376 is active
	I1101 09:22:56.717060   33346 main.go:143] libmachine: getting domain XML...
	I1101 09:22:56.718227   33346 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>test-preload-168376</name>
	  <uuid>ef1c7fca-b6ff-4b5b-9eb4-8f4490239c3d</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/test-preload-168376/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/test-preload-168376/test-preload-168376.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:2e:6e:26'/>
	      <source network='mk-test-preload-168376'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:27:65:c5'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:22:57.991575   33346 main.go:143] libmachine: waiting for domain to start...
	I1101 09:22:57.993220   33346 main.go:143] libmachine: domain is now running
	I1101 09:22:57.993256   33346 main.go:143] libmachine: waiting for IP...
	I1101 09:22:57.994059   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:22:57.994728   33346 main.go:143] libmachine: domain test-preload-168376 has current primary IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:22:57.994741   33346 main.go:143] libmachine: found domain IP: 192.168.39.170
	I1101 09:22:57.994747   33346 main.go:143] libmachine: reserving static IP address...
	I1101 09:22:57.995182   33346 main.go:143] libmachine: found host DHCP lease matching {name: "test-preload-168376", mac: "52:54:00:2e:6e:26", ip: "192.168.39.170"} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:21:27 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:22:57.995234   33346 main.go:143] libmachine: skip adding static IP to network mk-test-preload-168376 - found existing host DHCP lease matching {name: "test-preload-168376", mac: "52:54:00:2e:6e:26", ip: "192.168.39.170"}
	I1101 09:22:57.995243   33346 main.go:143] libmachine: reserved static IP address 192.168.39.170 for domain test-preload-168376
	I1101 09:22:57.995255   33346 main.go:143] libmachine: waiting for SSH...
	I1101 09:22:57.995260   33346 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 09:22:57.997613   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:22:57.997976   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:21:27 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:22:57.997996   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:22:57.998181   33346 main.go:143] libmachine: Using SSH client type: native
	I1101 09:22:57.998405   33346 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1101 09:22:57.998415   33346 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 09:23:01.082637   33346 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.170:22: connect: no route to host
	I1101 09:23:07.162609   33346 main.go:143] libmachine: Error dialing TCP: dial tcp 192.168.39.170:22: connect: no route to host
	I1101 09:23:10.287934   33346 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:23:10.291521   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.292027   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:10.292058   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.292398   33346 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/config.json ...
	I1101 09:23:10.292694   33346 machine.go:94] provisionDockerMachine start ...
	I1101 09:23:10.295426   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.295905   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:10.295950   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.296151   33346 main.go:143] libmachine: Using SSH client type: native
	I1101 09:23:10.296436   33346 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1101 09:23:10.296457   33346 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:23:10.417990   33346 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 09:23:10.418037   33346 buildroot.go:166] provisioning hostname "test-preload-168376"
	I1101 09:23:10.421056   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.421501   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:10.421538   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.421702   33346 main.go:143] libmachine: Using SSH client type: native
	I1101 09:23:10.421927   33346 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1101 09:23:10.421944   33346 main.go:143] libmachine: About to run SSH command:
	sudo hostname test-preload-168376 && echo "test-preload-168376" | sudo tee /etc/hostname
	I1101 09:23:10.554797   33346 main.go:143] libmachine: SSH cmd err, output: <nil>: test-preload-168376
	
	I1101 09:23:10.557716   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.558203   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:10.558248   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.558458   33346 main.go:143] libmachine: Using SSH client type: native
	I1101 09:23:10.558648   33346 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1101 09:23:10.558663   33346 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-168376' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-168376/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-168376' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:23:10.684328   33346 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:23:10.684360   33346 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5912/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5912/.minikube}
	I1101 09:23:10.684411   33346 buildroot.go:174] setting up certificates
	I1101 09:23:10.684429   33346 provision.go:84] configureAuth start
	I1101 09:23:10.687342   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.687702   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:10.687724   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.689933   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.690234   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:10.690251   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.690382   33346 provision.go:143] copyHostCerts
	I1101 09:23:10.690439   33346 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem, removing ...
	I1101 09:23:10.690455   33346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem
	I1101 09:23:10.690528   33346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem (1082 bytes)
	I1101 09:23:10.690671   33346 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem, removing ...
	I1101 09:23:10.690681   33346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem
	I1101 09:23:10.690708   33346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem (1123 bytes)
	I1101 09:23:10.690762   33346 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem, removing ...
	I1101 09:23:10.690768   33346 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem
	I1101 09:23:10.690801   33346 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem (1679 bytes)
	I1101 09:23:10.690851   33346 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem org=jenkins.test-preload-168376 san=[127.0.0.1 192.168.39.170 localhost minikube test-preload-168376]
	I1101 09:23:10.841453   33346 provision.go:177] copyRemoteCerts
	I1101 09:23:10.841509   33346 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:23:10.844171   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.844514   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:10.844533   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:10.844646   33346 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/test-preload-168376/id_rsa Username:docker}
	I1101 09:23:10.933627   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:23:10.964328   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 09:23:10.994955   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:23:11.024846   33346 provision.go:87] duration metric: took 340.401565ms to configureAuth
	I1101 09:23:11.024878   33346 buildroot.go:189] setting minikube options for container-runtime
	I1101 09:23:11.025083   33346 config.go:182] Loaded profile config "test-preload-168376": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 09:23:11.027983   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.028523   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:11.028557   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.028758   33346 main.go:143] libmachine: Using SSH client type: native
	I1101 09:23:11.028998   33346 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1101 09:23:11.029016   33346 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:23:11.288151   33346 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:23:11.288180   33346 machine.go:97] duration metric: took 995.467757ms to provisionDockerMachine
	I1101 09:23:11.288196   33346 start.go:293] postStartSetup for "test-preload-168376" (driver="kvm2")
	I1101 09:23:11.288232   33346 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:23:11.288308   33346 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:23:11.291321   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.291699   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:11.291723   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.291871   33346 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/test-preload-168376/id_rsa Username:docker}
	I1101 09:23:11.381331   33346 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:23:11.386238   33346 info.go:137] Remote host: Buildroot 2025.02
	I1101 09:23:11.386264   33346 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/addons for local assets ...
	I1101 09:23:11.386341   33346 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/files for local assets ...
	I1101 09:23:11.386475   33346 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem -> 97932.pem in /etc/ssl/certs
	I1101 09:23:11.386581   33346 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:23:11.398020   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem --> /etc/ssl/certs/97932.pem (1708 bytes)
	I1101 09:23:11.427750   33346 start.go:296] duration metric: took 139.538051ms for postStartSetup
	I1101 09:23:11.427803   33346 fix.go:56] duration metric: took 14.715710583s for fixHost
	I1101 09:23:11.430671   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.431180   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:11.431222   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.431423   33346 main.go:143] libmachine: Using SSH client type: native
	I1101 09:23:11.431652   33346 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I1101 09:23:11.431665   33346 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 09:23:11.547775   33346 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761988991.503898077
	
	I1101 09:23:11.547803   33346 fix.go:216] guest clock: 1761988991.503898077
	I1101 09:23:11.547810   33346 fix.go:229] Guest: 2025-11-01 09:23:11.503898077 +0000 UTC Remote: 2025-11-01 09:23:11.427808331 +0000 UTC m=+18.345418579 (delta=76.089746ms)
	I1101 09:23:11.547837   33346 fix.go:200] guest clock delta is within tolerance: 76.089746ms
	I1101 09:23:11.547842   33346 start.go:83] releasing machines lock for "test-preload-168376", held for 14.835762076s
	I1101 09:23:11.550526   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.550938   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:11.550971   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.551516   33346 ssh_runner.go:195] Run: cat /version.json
	I1101 09:23:11.551588   33346 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:23:11.554530   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.554806   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.554981   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:11.555032   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.555274   33346 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/test-preload-168376/id_rsa Username:docker}
	I1101 09:23:11.555293   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:11.555325   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:11.555503   33346 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/test-preload-168376/id_rsa Username:docker}
	I1101 09:23:11.643809   33346 ssh_runner.go:195] Run: systemctl --version
	I1101 09:23:11.672967   33346 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:23:11.825921   33346 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:23:11.832922   33346 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:23:11.832987   33346 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:23:11.853919   33346 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:23:11.853948   33346 start.go:496] detecting cgroup driver to use...
	I1101 09:23:11.854023   33346 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:23:11.874126   33346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:23:11.891629   33346 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:23:11.891705   33346 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:23:11.909268   33346 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:23:11.926544   33346 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:23:12.073616   33346 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:23:12.283621   33346 docker.go:234] disabling docker service ...
	I1101 09:23:12.283699   33346 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:23:12.299834   33346 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:23:12.314895   33346 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:23:12.472063   33346 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:23:12.609679   33346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:23:12.624613   33346 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:23:12.645893   33346 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1101 09:23:12.645952   33346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:23:12.657670   33346 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:23:12.657723   33346 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:23:12.669292   33346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:23:12.681164   33346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:23:12.692954   33346 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:23:12.705301   33346 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:23:12.717328   33346 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:23:12.737940   33346 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
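
The run of sed edits above points cri-o at the registry.k8s.io/pause:3.10 pause image, switches the cgroup manager to cgroupfs, and opens unprivileged port 0 through default_sysctls, all inside /etc/crio/crio.conf.d/02-crio.conf. As a rough illustration only (not minikube's actual code path, which drives sed over SSH), the first two rewrites could be done natively in Go like this; the file path comes from the log, everything else is assumed:

    // crio_conf_tweak.go - illustrative sketch of the pause-image and
    // cgroup-manager rewrites logged above. Run with enough privileges to
    // write /etc/crio/crio.conf.d/02-crio.conf.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Replace any existing (possibly commented) pause_image / cgroup_manager lines.
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
        log.Printf("updated %s", path)
    }
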
	I1101 09:23:12.750355   33346 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:23:12.760765   33346 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 09:23:12.760829   33346 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 09:23:12.781465   33346 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:23:12.793087   33346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:23:12.936646   33346 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:23:13.056638   33346 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:23:13.056712   33346 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:23:13.061905   33346 start.go:564] Will wait 60s for crictl version
	I1101 09:23:13.061979   33346 ssh_runner.go:195] Run: which crictl
	I1101 09:23:13.066096   33346 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 09:23:13.105411   33346 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
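
The lines above are the output of `sudo /usr/bin/crictl version`, which confirms the CRI runtime (cri-o 1.29.1) is answering on its socket before the start continues. A minimal local sketch of the same check, assuming crictl is on PATH and passwordless sudo; the helper name is made up:

    // crictl_version_check.go - sketch only (not minikube's ssh_runner): run
    // "crictl version" and surface the RuntimeVersion line.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
        if err != nil {
            log.Fatalf("crictl version failed: %v\n%s", err, out)
        }
        for _, line := range strings.Split(string(out), "\n") {
            if strings.HasPrefix(line, "RuntimeVersion") {
                fmt.Println(strings.TrimSpace(line)) // e.g. "RuntimeVersion:  1.29.1"
            }
        }
    }
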
	I1101 09:23:13.105496   33346 ssh_runner.go:195] Run: crio --version
	I1101 09:23:13.133618   33346 ssh_runner.go:195] Run: crio --version
	I1101 09:23:13.163620   33346 out.go:179] * Preparing Kubernetes v1.32.0 on CRI-O 1.29.1 ...
	I1101 09:23:13.167997   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:13.168445   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:13.168472   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:13.168674   33346 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 09:23:13.173139   33346 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:23:13.188096   33346 kubeadm.go:884] updating cluster {Name:test-preload-168376 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.32.0 ClusterName:test-preload-168376 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:23:13.188223   33346 preload.go:183] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1101 09:23:13.188286   33346 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:23:13.226392   33346 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.32.0". assuming images are not preloaded.
	I1101 09:23:13.226455   33346 ssh_runner.go:195] Run: which lz4
	I1101 09:23:13.230863   33346 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 09:23:13.235649   33346 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 09:23:13.235676   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (398646650 bytes)
	I1101 09:23:14.606308   33346 crio.go:462] duration metric: took 1.375476814s to copy over tarball
	I1101 09:23:14.606379   33346 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 09:23:16.273013   33346 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.666609931s)
	I1101 09:23:16.273040   33346 crio.go:469] duration metric: took 1.666700917s to extract the tarball
	I1101 09:23:16.273047   33346 ssh_runner.go:146] rm: /preloaded.tar.lz4
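
The preload step copies preloaded-images-k8s-v18-v1.32.0-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4, unpacks it into /var with lz4-compressed tar, then deletes the tarball. A hedged sketch of that extract-and-clean-up step run locally rather than through ssh_runner, assuming lz4 is installed and the tarball is already in place:

    // preload_extract.go - sketch of the extraction command shown above.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("extracting preload tarball: %v", err)
        }
        // Remove the tarball once extracted, mirroring the rm step in the log.
        if err := os.Remove("/preloaded.tar.lz4"); err != nil {
            log.Printf("rm /preloaded.tar.lz4: %v (may need sudo)", err)
        }
    }
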
	I1101 09:23:16.313372   33346 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:23:16.359911   33346 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:23:16.359934   33346 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:23:16.359941   33346 kubeadm.go:935] updating node { 192.168.39.170 8443 v1.32.0 crio true true} ...
	I1101 09:23:16.360039   33346 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=test-preload-168376 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:test-preload-168376 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:23:16.360118   33346 ssh_runner.go:195] Run: crio config
	I1101 09:23:16.409937   33346 cni.go:84] Creating CNI manager for ""
	I1101 09:23:16.409965   33346 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:23:16.409990   33346 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:23:16.410019   33346 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-168376 NodeName:test-preload-168376 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:23:16.410176   33346 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-168376"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.170"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:23:16.410260   33346 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1101 09:23:16.422526   33346 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:23:16.422599   33346 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:23:16.433969   33346 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I1101 09:23:16.454274   33346 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:23:16.474630   33346 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
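
At this point the rendered kubeadm config has been copied to /var/tmp/minikube/kubeadm.yaml.new. A small sanity-check sketch (a hypothetical helper, not part of minikube) that verifies the rendered file still carries the cgroupfs driver, the cri-o socket, and the expected Kubernetes version before it replaces kubeadm.yaml:

    // kubeadm_yaml_check.go - hypothetical sanity check of the rendered config.
    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            log.Fatal(err)
        }
        text := string(data)
        for _, want := range []string{
            "cgroupDriver: cgroupfs",
            "containerRuntimeEndpoint: unix:///var/run/crio/crio.sock",
            "kubernetesVersion: v1.32.0",
        } {
            if !strings.Contains(text, want) {
                log.Fatalf("rendered config is missing %q", want)
            }
        }
        fmt.Println("kubeadm.yaml.new is consistent with the crio/cgroupfs setup")
    }
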
	I1101 09:23:16.495767   33346 ssh_runner.go:195] Run: grep 192.168.39.170	control-plane.minikube.internal$ /etc/hosts
	I1101 09:23:16.499899   33346 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.170	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
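
The /etc/hosts refresh above filters out any existing control-plane.minikube.internal entry with grep -v, appends the current mapping, and copies the temp file back with sudo. A sketch of the same idea in Go, writing the file directly; it assumes root and skips the temp-file indirection used in the log:

    // hosts_refresh.go - illustrative only: rebuild /etc/hosts with a single
    // control-plane.minikube.internal entry.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const hostname = "control-plane.minikube.internal"
        const entry = "192.168.39.170\t" + hostname

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Drop the old mapping (and stray blank lines) so it is never duplicated.
            if strings.TrimSpace(line) == "" || strings.Contains(line, hostname) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry, "") // keep a trailing newline
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Printf("refreshed /etc/hosts with %q", entry)
    }
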
	I1101 09:23:16.514280   33346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:23:16.656788   33346 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:23:16.688034   33346 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376 for IP: 192.168.39.170
	I1101 09:23:16.688073   33346 certs.go:195] generating shared ca certs ...
	I1101 09:23:16.688097   33346 certs.go:227] acquiring lock for ca certs: {Name:mk23a33d19209ad24f4406326ada43ab5cb57960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:23:16.688294   33346 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key
	I1101 09:23:16.688359   33346 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key
	I1101 09:23:16.688374   33346 certs.go:257] generating profile certs ...
	I1101 09:23:16.688469   33346 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/client.key
	I1101 09:23:16.688532   33346 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/apiserver.key.ac842a0d
	I1101 09:23:16.688565   33346 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/proxy-client.key
	I1101 09:23:16.688667   33346 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793.pem (1338 bytes)
	W1101 09:23:16.688695   33346 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793_empty.pem, impossibly tiny 0 bytes
	I1101 09:23:16.688701   33346 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:23:16.688723   33346 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:23:16.688742   33346 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:23:16.688763   33346 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem (1679 bytes)
	I1101 09:23:16.688808   33346 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem (1708 bytes)
	I1101 09:23:16.689365   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:23:16.728809   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:23:16.771246   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:23:16.800473   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 09:23:16.830293   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 09:23:16.859002   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:23:16.887772   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:23:16.917949   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:23:16.947696   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:23:16.976919   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793.pem --> /usr/share/ca-certificates/9793.pem (1338 bytes)
	I1101 09:23:17.006372   33346 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem --> /usr/share/ca-certificates/97932.pem (1708 bytes)
	I1101 09:23:17.036179   33346 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:23:17.057153   33346 ssh_runner.go:195] Run: openssl version
	I1101 09:23:17.063647   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:23:17.077056   33346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:23:17.082517   33346 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:23:17.082576   33346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:23:17.089912   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:23:17.103092   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9793.pem && ln -fs /usr/share/ca-certificates/9793.pem /etc/ssl/certs/9793.pem"
	I1101 09:23:17.116451   33346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9793.pem
	I1101 09:23:17.121766   33346 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:38 /usr/share/ca-certificates/9793.pem
	I1101 09:23:17.121834   33346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9793.pem
	I1101 09:23:17.129168   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9793.pem /etc/ssl/certs/51391683.0"
	I1101 09:23:17.143018   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97932.pem && ln -fs /usr/share/ca-certificates/97932.pem /etc/ssl/certs/97932.pem"
	I1101 09:23:17.156437   33346 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97932.pem
	I1101 09:23:17.161475   33346 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:38 /usr/share/ca-certificates/97932.pem
	I1101 09:23:17.161546   33346 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97932.pem
	I1101 09:23:17.168522   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/97932.pem /etc/ssl/certs/3ec20f2e.0"
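
Each CA bundle is copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how TLS libraries that scan that directory locate it. A sketch of that hash-and-link step, shelling out to openssl for the hash; the helper and its hard-coded path are assumptions:

    // cert_hash_link.go - sketch of the hash-and-symlink step logged above.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatalf("openssl x509 -hash: %v", err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Ignore "already exists" so re-runs stay idempotent.
        if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
            log.Fatalf("symlink %s -> %s: %v", link, pem, err)
        }
        fmt.Println("linked", link)
    }
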
	I1101 09:23:17.181596   33346 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:23:17.186661   33346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:23:17.193895   33346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:23:17.200879   33346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:23:17.208307   33346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:23:17.215680   33346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:23:17.222822   33346 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
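
The `openssl x509 ... -checkend 86400` calls above verify that none of the control-plane certificates expire within the next 24 hours. The same check can be sketched in pure Go with crypto/x509; the certificate path is one of those listed in the log, the rest is illustrative:

    // cert_expiry_check.go - sketch of the "-checkend 86400" idea in Go.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
        } else {
            fmt.Println("certificate valid until", cert.NotAfter)
        }
    }
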
	I1101 09:23:17.230041   33346 kubeadm.go:401] StartCluster: {Name:test-preload-168376 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
32.0 ClusterName:test-preload-168376 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:23:17.230125   33346 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:23:17.230171   33346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:23:17.269957   33346 cri.go:89] found id: ""
	I1101 09:23:17.270030   33346 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:23:17.282628   33346 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:23:17.282650   33346 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:23:17.282693   33346 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:23:17.294219   33346 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:23:17.294561   33346 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-168376" does not appear in /home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:23:17.294645   33346 kubeconfig.go:62] /home/jenkins/minikube-integration/21835-5912/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-168376" cluster setting kubeconfig missing "test-preload-168376" context setting]
	I1101 09:23:17.294887   33346 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/kubeconfig: {Name:mk599bec02e6b7062c3926243176124a4bc71dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:23:17.295400   33346 kapi.go:59] client config for test-preload-168376: &rest.Config{Host:"https://192.168.39.170:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:23:17.295782   33346 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1101 09:23:17.295803   33346 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1101 09:23:17.295808   33346 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1101 09:23:17.295813   33346 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1101 09:23:17.295816   33346 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1101 09:23:17.296116   33346 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:23:17.307458   33346 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.39.170
	I1101 09:23:17.307489   33346 kubeadm.go:1161] stopping kube-system containers ...
	I1101 09:23:17.307500   33346 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1101 09:23:17.307544   33346 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:23:17.350704   33346 cri.go:89] found id: ""
	I1101 09:23:17.350803   33346 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1101 09:23:17.369625   33346 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:23:17.381337   33346 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:23:17.381356   33346 kubeadm.go:158] found existing configuration files:
	
	I1101 09:23:17.381396   33346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:23:17.392295   33346 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:23:17.392353   33346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:23:17.403946   33346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:23:17.415028   33346 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:23:17.415091   33346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:23:17.426500   33346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:23:17.437669   33346 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:23:17.437745   33346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:23:17.449630   33346 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:23:17.460739   33346 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:23:17.460811   33346 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
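
The grep/rm sequence above is the stale-kubeconfig cleanup: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so the following `kubeadm init phase kubeconfig all` regenerates it. A compact sketch of that loop (hypothetical helper, run as root):

    // stale_kubeconfig_cleanup.go - sketch of the cleanup loop logged above.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or stale: remove it (ignore "not exist" errors).
                if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
                    log.Printf("remove %s: %v", conf, err)
                }
                continue
            }
            log.Printf("%s already points at %s, keeping it", conf, endpoint)
        }
    }
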
	I1101 09:23:17.472736   33346 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:23:17.485820   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:23:17.543410   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:23:18.577730   33346 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.034285722s)
	I1101 09:23:18.577789   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:23:18.821127   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:23:18.880835   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:23:18.949625   33346 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:23:18.949706   33346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:23:19.450120   33346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:23:19.950474   33346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:23:20.449862   33346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:23:20.950063   33346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:23:21.449834   33346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:23:21.486106   33346 api_server.go:72] duration metric: took 2.53649196s to wait for apiserver process to appear ...
	I1101 09:23:21.486133   33346 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:23:21.486155   33346 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1101 09:23:23.957891   33346 api_server.go:279] https://192.168.39.170:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:23:23.957922   33346 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:23:23.957939   33346 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1101 09:23:24.058325   33346 api_server.go:279] https://192.168.39.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:23:24.058358   33346 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:23:24.058379   33346 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1101 09:23:24.066982   33346 api_server.go:279] https://192.168.39.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:23:24.067015   33346 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:23:24.486196   33346 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1101 09:23:24.490823   33346 api_server.go:279] https://192.168.39.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:23:24.490860   33346 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:23:24.986518   33346 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1101 09:23:24.992460   33346 api_server.go:279] https://192.168.39.170:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:23:24.992494   33346 api_server.go:103] status: https://192.168.39.170:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:23:25.487238   33346 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1101 09:23:25.494048   33346 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I1101 09:23:25.504670   33346 api_server.go:141] control plane version: v1.32.0
	I1101 09:23:25.504693   33346 api_server.go:131] duration metric: took 4.018554374s to wait for apiserver health ...
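
The healthz wait above polls https://192.168.39.170:8443/healthz roughly twice a second: the first answer is a 403 for the anonymous user, then 500s while poststarthooks (rbac/bootstrap-roles, bootstrap-controller, and so on) finish, and finally a plain 200 "ok" after about four seconds. A sketch of such a poll loop; it skips TLS verification for brevity, whereas minikube's real client presents the profile's client certificate:

    // healthz_poll.go - sketch of the apiserver readiness poll shown above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.39.170:8443/healthz"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body)) // "ok"
                    return
                }
                // 403 (anonymous) and 500 (poststarthooks pending) are expected early on.
                log.Printf("healthz returned %d, retrying", resp.StatusCode)
            } else {
                log.Printf("healthz request failed: %v, retrying", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("apiserver did not become healthy in time")
    }
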
	I1101 09:23:25.504702   33346 cni.go:84] Creating CNI manager for ""
	I1101 09:23:25.504708   33346 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:23:25.506267   33346 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 09:23:25.507534   33346 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 09:23:25.526899   33346 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 09:23:25.565543   33346 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:23:25.571667   33346 system_pods.go:59] 7 kube-system pods found
	I1101 09:23:25.571715   33346 system_pods.go:61] "coredns-668d6bf9bc-wjj7t" [a8d1c095-21d4-4a44-8dba-65b727cd4e03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:23:25.571729   33346 system_pods.go:61] "etcd-test-preload-168376" [357ed5b4-8fc1-47f0-890d-03742c861b3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:23:25.571746   33346 system_pods.go:61] "kube-apiserver-test-preload-168376" [818054e2-bdd3-470f-b984-508a30821c65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:23:25.571754   33346 system_pods.go:61] "kube-controller-manager-test-preload-168376" [7a328e40-14f0-45c5-b6a9-60be9afdd011] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:23:25.571762   33346 system_pods.go:61] "kube-proxy-4cnd6" [e491c200-c23c-49cb-a67f-7f4d1a0dd161] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:23:25.571774   33346 system_pods.go:61] "kube-scheduler-test-preload-168376" [cecb10e6-29e4-4b95-9b4b-afaedc589a70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:23:25.571789   33346 system_pods.go:61] "storage-provisioner" [82b5a635-9ddc-406c-8bcb-5fb53c95bad6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:23:25.571800   33346 system_pods.go:74] duration metric: took 6.231231ms to wait for pod list to return data ...
	I1101 09:23:25.571813   33346 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:23:25.577115   33346 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 09:23:25.577145   33346 node_conditions.go:123] node cpu capacity is 2
	I1101 09:23:25.577161   33346 node_conditions.go:105] duration metric: took 5.343273ms to run NodePressure ...
	I1101 09:23:25.577227   33346 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.32.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:23:25.846786   33346 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 09:23:25.851335   33346 kubeadm.go:744] kubelet initialised
	I1101 09:23:25.851356   33346 kubeadm.go:745] duration metric: took 4.541542ms waiting for restarted kubelet to initialise ...
	I1101 09:23:25.851369   33346 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:23:25.870560   33346 ops.go:34] apiserver oom_adj: -16
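
Reading /proc/$(pgrep kube-apiserver)/oom_adj confirms the apiserver runs with a strongly negative OOM adjustment (-16 here), so the kernel prefers other victims under memory pressure. A tiny sketch of that check; pgrep -n (newest match) is an assumption, the log simply uses pgrep kube-apiserver:

    // oom_adj_check.go - sketch of the oom_adj sanity check above.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
        if err != nil {
            log.Fatalf("pgrep kube-apiserver: %v", err)
        }
        data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
    }
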
	I1101 09:23:25.870592   33346 kubeadm.go:602] duration metric: took 8.587933239s to restartPrimaryControlPlane
	I1101 09:23:25.870605   33346 kubeadm.go:403] duration metric: took 8.640571406s to StartCluster
	I1101 09:23:25.870625   33346 settings.go:142] acquiring lock: {Name:mk818d33e162ca33774e3ab05f6aac30f8feaf64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:23:25.870697   33346 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:23:25.871464   33346 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/kubeconfig: {Name:mk599bec02e6b7062c3926243176124a4bc71dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:23:25.871783   33346 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:23:25.871868   33346 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:23:25.871956   33346 addons.go:70] Setting storage-provisioner=true in profile "test-preload-168376"
	I1101 09:23:25.871975   33346 addons.go:239] Setting addon storage-provisioner=true in "test-preload-168376"
	W1101 09:23:25.871995   33346 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:23:25.871993   33346 addons.go:70] Setting default-storageclass=true in profile "test-preload-168376"
	I1101 09:23:25.872017   33346 config.go:182] Loaded profile config "test-preload-168376": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1101 09:23:25.872024   33346 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "test-preload-168376"
	I1101 09:23:25.872023   33346 host.go:66] Checking if "test-preload-168376" exists ...
	I1101 09:23:25.874628   33346 out.go:179] * Verifying Kubernetes components...
	I1101 09:23:25.874630   33346 kapi.go:59] client config for test-preload-168376: &rest.Config{Host:"https://192.168.39.170:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:23:25.875350   33346 addons.go:239] Setting addon default-storageclass=true in "test-preload-168376"
	W1101 09:23:25.875364   33346 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:23:25.875382   33346 host.go:66] Checking if "test-preload-168376" exists ...
	I1101 09:23:25.875667   33346 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:23:25.876430   33346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:23:25.877102   33346 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:23:25.877117   33346 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:23:25.877259   33346 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:23:25.877274   33346 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:23:25.880328   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:25.880514   33346 main.go:143] libmachine: domain test-preload-168376 has defined MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:25.880753   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:25.880782   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:25.880930   33346 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/test-preload-168376/id_rsa Username:docker}
	I1101 09:23:25.880944   33346 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:2e:6e:26", ip: ""} in network mk-test-preload-168376: {Iface:virbr1 ExpiryTime:2025-11-01 10:23:08 +0000 UTC Type:0 Mac:52:54:00:2e:6e:26 Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:test-preload-168376 Clientid:01:52:54:00:2e:6e:26}
	I1101 09:23:25.880967   33346 main.go:143] libmachine: domain test-preload-168376 has defined IP address 192.168.39.170 and MAC address 52:54:00:2e:6e:26 in network mk-test-preload-168376
	I1101 09:23:25.881167   33346 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/test-preload-168376/id_rsa Username:docker}
	I1101 09:23:26.130634   33346 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:23:26.159133   33346 node_ready.go:35] waiting up to 6m0s for node "test-preload-168376" to be "Ready" ...
	I1101 09:23:26.162857   33346 node_ready.go:49] node "test-preload-168376" is "Ready"
	I1101 09:23:26.162887   33346 node_ready.go:38] duration metric: took 3.690038ms for node "test-preload-168376" to be "Ready" ...
	I1101 09:23:26.162899   33346 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:23:26.162951   33346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:23:26.184260   33346 api_server.go:72] duration metric: took 312.438237ms to wait for apiserver process to appear ...
	I1101 09:23:26.184285   33346 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:23:26.184302   33346 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I1101 09:23:26.190377   33346 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I1101 09:23:26.191449   33346 api_server.go:141] control plane version: v1.32.0
	I1101 09:23:26.191469   33346 api_server.go:131] duration metric: took 7.177956ms to wait for apiserver health ...
	I1101 09:23:26.191476   33346 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:23:26.194317   33346 system_pods.go:59] 7 kube-system pods found
	I1101 09:23:26.194341   33346 system_pods.go:61] "coredns-668d6bf9bc-wjj7t" [a8d1c095-21d4-4a44-8dba-65b727cd4e03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:23:26.194348   33346 system_pods.go:61] "etcd-test-preload-168376" [357ed5b4-8fc1-47f0-890d-03742c861b3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:23:26.194356   33346 system_pods.go:61] "kube-apiserver-test-preload-168376" [818054e2-bdd3-470f-b984-508a30821c65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:23:26.194362   33346 system_pods.go:61] "kube-controller-manager-test-preload-168376" [7a328e40-14f0-45c5-b6a9-60be9afdd011] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:23:26.194366   33346 system_pods.go:61] "kube-proxy-4cnd6" [e491c200-c23c-49cb-a67f-7f4d1a0dd161] Running
	I1101 09:23:26.194371   33346 system_pods.go:61] "kube-scheduler-test-preload-168376" [cecb10e6-29e4-4b95-9b4b-afaedc589a70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:23:26.194377   33346 system_pods.go:61] "storage-provisioner" [82b5a635-9ddc-406c-8bcb-5fb53c95bad6] Running
	I1101 09:23:26.194383   33346 system_pods.go:74] duration metric: took 2.901507ms to wait for pod list to return data ...
	I1101 09:23:26.194392   33346 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:23:26.196570   33346 default_sa.go:45] found service account: "default"
	I1101 09:23:26.196597   33346 default_sa.go:55] duration metric: took 2.196936ms for default service account to be created ...
	I1101 09:23:26.196609   33346 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:23:26.200007   33346 system_pods.go:86] 7 kube-system pods found
	I1101 09:23:26.200031   33346 system_pods.go:89] "coredns-668d6bf9bc-wjj7t" [a8d1c095-21d4-4a44-8dba-65b727cd4e03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:23:26.200037   33346 system_pods.go:89] "etcd-test-preload-168376" [357ed5b4-8fc1-47f0-890d-03742c861b3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:23:26.200044   33346 system_pods.go:89] "kube-apiserver-test-preload-168376" [818054e2-bdd3-470f-b984-508a30821c65] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:23:26.200049   33346 system_pods.go:89] "kube-controller-manager-test-preload-168376" [7a328e40-14f0-45c5-b6a9-60be9afdd011] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:23:26.200057   33346 system_pods.go:89] "kube-proxy-4cnd6" [e491c200-c23c-49cb-a67f-7f4d1a0dd161] Running
	I1101 09:23:26.200062   33346 system_pods.go:89] "kube-scheduler-test-preload-168376" [cecb10e6-29e4-4b95-9b4b-afaedc589a70] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:23:26.200065   33346 system_pods.go:89] "storage-provisioner" [82b5a635-9ddc-406c-8bcb-5fb53c95bad6] Running
	I1101 09:23:26.200071   33346 system_pods.go:126] duration metric: took 3.456359ms to wait for k8s-apps to be running ...
	I1101 09:23:26.200076   33346 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:23:26.200125   33346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:23:26.233202   33346 system_svc.go:56] duration metric: took 33.113364ms WaitForService to wait for kubelet
	I1101 09:23:26.233256   33346 kubeadm.go:587] duration metric: took 361.435557ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:23:26.233279   33346 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:23:26.238862   33346 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 09:23:26.238889   33346 node_conditions.go:123] node cpu capacity is 2
	I1101 09:23:26.238901   33346 node_conditions.go:105] duration metric: took 5.616233ms to run NodePressure ...
	I1101 09:23:26.238916   33346 start.go:242] waiting for startup goroutines ...
	I1101 09:23:26.292272   33346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:23:26.292943   33346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:23:26.923925   33346 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:23:26.925452   33346 addons.go:515] duration metric: took 1.053600283s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:23:26.925492   33346 start.go:247] waiting for cluster config update ...
	I1101 09:23:26.925507   33346 start.go:256] writing updated cluster config ...
	I1101 09:23:26.925764   33346 ssh_runner.go:195] Run: rm -f paused
	I1101 09:23:26.931580   33346 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:23:26.932316   33346 kapi.go:59] client config for test-preload-168376: &rest.Config{Host:"https://192.168.39.170:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/profiles/test-preload-168376/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:23:26.936638   33346 pod_ready.go:83] waiting for pod "coredns-668d6bf9bc-wjj7t" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:23:28.943645   33346 pod_ready.go:104] pod "coredns-668d6bf9bc-wjj7t" is not "Ready", error: <nil>
	I1101 09:23:29.942334   33346 pod_ready.go:94] pod "coredns-668d6bf9bc-wjj7t" is "Ready"
	I1101 09:23:29.942359   33346 pod_ready.go:86] duration metric: took 3.005695587s for pod "coredns-668d6bf9bc-wjj7t" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:23:29.945026   33346 pod_ready.go:83] waiting for pod "etcd-test-preload-168376" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:23:31.950785   33346 pod_ready.go:104] pod "etcd-test-preload-168376" is not "Ready", error: <nil>
	I1101 09:23:33.451527   33346 pod_ready.go:94] pod "etcd-test-preload-168376" is "Ready"
	I1101 09:23:33.451553   33346 pod_ready.go:86] duration metric: took 3.506488153s for pod "etcd-test-preload-168376" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:23:33.453863   33346 pod_ready.go:83] waiting for pod "kube-apiserver-test-preload-168376" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:23:35.459321   33346 pod_ready.go:104] pod "kube-apiserver-test-preload-168376" is not "Ready", error: <nil>
	W1101 09:23:37.459983   33346 pod_ready.go:104] pod "kube-apiserver-test-preload-168376" is not "Ready", error: <nil>
	W1101 09:23:39.460348   33346 pod_ready.go:104] pod "kube-apiserver-test-preload-168376" is not "Ready", error: <nil>
	I1101 09:23:39.960318   33346 pod_ready.go:94] pod "kube-apiserver-test-preload-168376" is "Ready"
	I1101 09:23:39.960342   33346 pod_ready.go:86] duration metric: took 6.506454569s for pod "kube-apiserver-test-preload-168376" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:23:39.962458   33346 pod_ready.go:83] waiting for pod "kube-controller-manager-test-preload-168376" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:23:39.967149   33346 pod_ready.go:94] pod "kube-controller-manager-test-preload-168376" is "Ready"
	I1101 09:23:39.967170   33346 pod_ready.go:86] duration metric: took 4.693277ms for pod "kube-controller-manager-test-preload-168376" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:23:39.969052   33346 pod_ready.go:83] waiting for pod "kube-proxy-4cnd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:23:39.972837   33346 pod_ready.go:94] pod "kube-proxy-4cnd6" is "Ready"
	I1101 09:23:39.972855   33346 pod_ready.go:86] duration metric: took 3.779287ms for pod "kube-proxy-4cnd6" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:23:39.975582   33346 pod_ready.go:83] waiting for pod "kube-scheduler-test-preload-168376" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:23:40.980843   33346 pod_ready.go:94] pod "kube-scheduler-test-preload-168376" is "Ready"
	I1101 09:23:40.980871   33346 pod_ready.go:86] duration metric: took 1.005260546s for pod "kube-scheduler-test-preload-168376" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:23:40.980882   33346 pod_ready.go:40] duration metric: took 14.049257431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:23:41.025059   33346 start.go:628] kubectl: 1.34.1, cluster: 1.32.0 (minor skew: 2)
	I1101 09:23:41.027254   33346 out.go:203] 
	W1101 09:23:41.028568   33346 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.32.0.
	I1101 09:23:41.030165   33346 out.go:179]   - Want kubectl v1.32.0? Try 'minikube kubectl -- get pods -A'
	I1101 09:23:41.031575   33346 out.go:179] * Done! kubectl is now configured to use "test-preload-168376" cluster and "default" namespace by default
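	(Editor's note on the version-skew warning a few lines above: the kubectl bundled with minikube matches the cluster version and can be used instead of the host binary. A minimal sketch, assuming the same profile name as in this run:)

	  # use the cluster-matched kubectl rather than /usr/local/bin/kubectl (v1.34.1)
	  minikube -p test-preload-168376 kubectl -- version --client
	  minikube -p test-preload-168376 kubectl -- get pods -A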
	
	
	==> CRI-O <==
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.834542530Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989021834519720,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db5a2e67-23c3-4908-88fd-4c479d40750a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.835153100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e567513-46af-4791-b7aa-c7d9cd6f938d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.835278130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e567513-46af-4791-b7aa-c7d9cd6f938d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.835769591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa96a3657989950a701e8aac72b1913d930f78c415d5601a8ccaf1b5d7d18fdc,PodSandboxId:bde938e314e6c7c4091552f0bd26caa586f02d20be6dea559bde7bda8f1798da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761989008959195740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wjj7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8d1c095-21d4-4a44-8dba-65b727cd4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64acf91c4210bdc19b0f259b557d09debce4f87901b60a6b86d87cbb5e7faa7,PodSandboxId:95b7d04681acf9b86ddbc8411492a89d18523f234f0d2b53d7070e5a56a1006e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761989005365828656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cnd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e491c200-c23c-49cb-a67f-7f4d1a0dd161,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c092fdf0eaecb8c0558c7c4a759a800d38e70a6d0a59f2af5c08e66959cbdf4a,PodSandboxId:1ba775a9ce19585b11378c58b5095368168b3c89282895d622f21a3029aec31f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761989005344608108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82
b5a635-9ddc-406c-8bcb-5fb53c95bad6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7058bb9289d851c5f41feee507c1ead54787c306cd0db4bf9bee62307851e71,PodSandboxId:4ac191e40d2de1740ba29c44453fce39d07b7dcae876ca014fe00fe4098e3045,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761989001100343691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5422e51928abddd39a53e97034b57cfa,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e05aa86afcc117607f1b9b347e71bd21e6e8c06690628fdea64986428c4d66,PodSandboxId:44afd952e62da71be16b8b21b9d18f1b0efdda11acf44039da6eb7b1cd53a98b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761989001090507436,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e941f3470f00caafc8a4dee6d41f55c0,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1286a4966bb461ad33c350cfa6d497c8acfdfa629cf3faaa2869615258c7d3,PodSandboxId:534f62a541b70e13446090a059a6189180c2471f293fa87e5b8c3daf975a8a2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761989001110641073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21640c3a63d5e22a33fe9b0a287b6f24,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7517f73441998747f90dc7695b6c1ce6cf2a2934df2bb4c22a94188b303534bb,PodSandboxId:15fcd960aa35af4d0263747a704803854469c7aaf3a2e8863e65f8c767810c91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761989001067245980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d77c56f975b25263da49b87d1505d08,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e567513-46af-4791-b7aa-c7d9cd6f938d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.874790969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01bd2505-b02a-419f-9d79-30cc67fa93bb name=/runtime.v1.RuntimeService/Version
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.874892179Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01bd2505-b02a-419f-9d79-30cc67fa93bb name=/runtime.v1.RuntimeService/Version
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.879416349Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90b9d19d-3440-46e9-94c1-1d97ee65e57b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.879814600Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989021879793492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90b9d19d-3440-46e9-94c1-1d97ee65e57b name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.880477940Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c8156a8-a570-49de-9970-f38d019a789d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.880705050Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c8156a8-a570-49de-9970-f38d019a789d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.880866096Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa96a3657989950a701e8aac72b1913d930f78c415d5601a8ccaf1b5d7d18fdc,PodSandboxId:bde938e314e6c7c4091552f0bd26caa586f02d20be6dea559bde7bda8f1798da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761989008959195740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wjj7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8d1c095-21d4-4a44-8dba-65b727cd4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64acf91c4210bdc19b0f259b557d09debce4f87901b60a6b86d87cbb5e7faa7,PodSandboxId:95b7d04681acf9b86ddbc8411492a89d18523f234f0d2b53d7070e5a56a1006e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761989005365828656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cnd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e491c200-c23c-49cb-a67f-7f4d1a0dd161,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c092fdf0eaecb8c0558c7c4a759a800d38e70a6d0a59f2af5c08e66959cbdf4a,PodSandboxId:1ba775a9ce19585b11378c58b5095368168b3c89282895d622f21a3029aec31f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761989005344608108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82
b5a635-9ddc-406c-8bcb-5fb53c95bad6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7058bb9289d851c5f41feee507c1ead54787c306cd0db4bf9bee62307851e71,PodSandboxId:4ac191e40d2de1740ba29c44453fce39d07b7dcae876ca014fe00fe4098e3045,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761989001100343691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5422e51928abddd39a53e97034b57cfa,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e05aa86afcc117607f1b9b347e71bd21e6e8c06690628fdea64986428c4d66,PodSandboxId:44afd952e62da71be16b8b21b9d18f1b0efdda11acf44039da6eb7b1cd53a98b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761989001090507436,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e941f3470f00caafc8a4dee6d41f55c0,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1286a4966bb461ad33c350cfa6d497c8acfdfa629cf3faaa2869615258c7d3,PodSandboxId:534f62a541b70e13446090a059a6189180c2471f293fa87e5b8c3daf975a8a2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761989001110641073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21640c3a63d5e22a33fe9b0a287b6f24,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7517f73441998747f90dc7695b6c1ce6cf2a2934df2bb4c22a94188b303534bb,PodSandboxId:15fcd960aa35af4d0263747a704803854469c7aaf3a2e8863e65f8c767810c91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761989001067245980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d77c56f975b25263da49b87d1505d08,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c8156a8-a570-49de-9970-f38d019a789d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.920685868Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b1a65dab-e62c-4537-89a7-3e34643b4373 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.920772757Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b1a65dab-e62c-4537-89a7-3e34643b4373 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.922427979Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=90b51f44-dafb-47c6-9330-f94c8ae08e1e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.923347471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989021923321387,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=90b51f44-dafb-47c6-9330-f94c8ae08e1e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.923905220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ce4972e-ce1e-4462-9238-d3af1b6c6a18 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.924051831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ce4972e-ce1e-4462-9238-d3af1b6c6a18 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.924336402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa96a3657989950a701e8aac72b1913d930f78c415d5601a8ccaf1b5d7d18fdc,PodSandboxId:bde938e314e6c7c4091552f0bd26caa586f02d20be6dea559bde7bda8f1798da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761989008959195740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wjj7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8d1c095-21d4-4a44-8dba-65b727cd4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64acf91c4210bdc19b0f259b557d09debce4f87901b60a6b86d87cbb5e7faa7,PodSandboxId:95b7d04681acf9b86ddbc8411492a89d18523f234f0d2b53d7070e5a56a1006e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761989005365828656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cnd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e491c200-c23c-49cb-a67f-7f4d1a0dd161,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c092fdf0eaecb8c0558c7c4a759a800d38e70a6d0a59f2af5c08e66959cbdf4a,PodSandboxId:1ba775a9ce19585b11378c58b5095368168b3c89282895d622f21a3029aec31f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761989005344608108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82
b5a635-9ddc-406c-8bcb-5fb53c95bad6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7058bb9289d851c5f41feee507c1ead54787c306cd0db4bf9bee62307851e71,PodSandboxId:4ac191e40d2de1740ba29c44453fce39d07b7dcae876ca014fe00fe4098e3045,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761989001100343691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5422e51928abddd39a53e97034b57cfa,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e05aa86afcc117607f1b9b347e71bd21e6e8c06690628fdea64986428c4d66,PodSandboxId:44afd952e62da71be16b8b21b9d18f1b0efdda11acf44039da6eb7b1cd53a98b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761989001090507436,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e941f3470f00caafc8a4dee6d41f55c0,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1286a4966bb461ad33c350cfa6d497c8acfdfa629cf3faaa2869615258c7d3,PodSandboxId:534f62a541b70e13446090a059a6189180c2471f293fa87e5b8c3daf975a8a2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761989001110641073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21640c3a63d5e22a33fe9b0a287b6f24,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7517f73441998747f90dc7695b6c1ce6cf2a2934df2bb4c22a94188b303534bb,PodSandboxId:15fcd960aa35af4d0263747a704803854469c7aaf3a2e8863e65f8c767810c91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761989001067245980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d77c56f975b25263da49b87d1505d08,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ce4972e-ce1e-4462-9238-d3af1b6c6a18 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.960338200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5e5f990-ea05-49c8-9d0b-4e244601fafe name=/runtime.v1.RuntimeService/Version
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.960422199Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5e5f990-ea05-49c8-9d0b-4e244601fafe name=/runtime.v1.RuntimeService/Version
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.962568975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d001a0db-b3ff-44e7-9317-40fda93cfeac name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.963343580Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989021963316970,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d001a0db-b3ff-44e7-9317-40fda93cfeac name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.963848892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c558adf-ae5e-4933-8ba6-dc3c868e01d2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.963897803Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c558adf-ae5e-4933-8ba6-dc3c868e01d2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:23:41 test-preload-168376 crio[836]: time="2025-11-01 09:23:41.964046039Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aa96a3657989950a701e8aac72b1913d930f78c415d5601a8ccaf1b5d7d18fdc,PodSandboxId:bde938e314e6c7c4091552f0bd26caa586f02d20be6dea559bde7bda8f1798da,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1761989008959195740,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-668d6bf9bc-wjj7t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8d1c095-21d4-4a44-8dba-65b727cd4e03,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d64acf91c4210bdc19b0f259b557d09debce4f87901b60a6b86d87cbb5e7faa7,PodSandboxId:95b7d04681acf9b86ddbc8411492a89d18523f234f0d2b53d7070e5a56a1006e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08,State:CONTAINER_RUNNING,CreatedAt:1761989005365828656,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-4cnd6,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e491c200-c23c-49cb-a67f-7f4d1a0dd161,},Annotations:map[string]string{io.kubernetes.container.hash: 8f247ea6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c092fdf0eaecb8c0558c7c4a759a800d38e70a6d0a59f2af5c08e66959cbdf4a,PodSandboxId:1ba775a9ce19585b11378c58b5095368168b3c89282895d622f21a3029aec31f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1761989005344608108,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82
b5a635-9ddc-406c-8bcb-5fb53c95bad6,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7058bb9289d851c5f41feee507c1ead54787c306cd0db4bf9bee62307851e71,PodSandboxId:4ac191e40d2de1740ba29c44453fce39d07b7dcae876ca014fe00fe4098e3045,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,State:CONTAINER_RUNNING,CreatedAt:1761989001100343691,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5422e51928abddd39a53e97034b57cfa,},Anno
tations:map[string]string{io.kubernetes.container.hash: e68be80f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75e05aa86afcc117607f1b9b347e71bd21e6e8c06690628fdea64986428c4d66,PodSandboxId:44afd952e62da71be16b8b21b9d18f1b0efdda11acf44039da6eb7b1cd53a98b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5,State:CONTAINER_RUNNING,CreatedAt:1761989001090507436,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e941f3470f00caafc8a4dee6d41f55c0,},Annotations:map
[string]string{io.kubernetes.container.hash: 8c4b12d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1286a4966bb461ad33c350cfa6d497c8acfdfa629cf3faaa2869615258c7d3,PodSandboxId:534f62a541b70e13446090a059a6189180c2471f293fa87e5b8c3daf975a8a2d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3,State:CONTAINER_RUNNING,CreatedAt:1761989001110641073,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21640c3a63d5e22a33fe9b0a287b6f24,}
,Annotations:map[string]string{io.kubernetes.container.hash: 99f3a73e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7517f73441998747f90dc7695b6c1ce6cf2a2934df2bb4c22a94188b303534bb,PodSandboxId:15fcd960aa35af4d0263747a704803854469c7aaf3a2e8863e65f8c767810c91,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4,State:CONTAINER_RUNNING,CreatedAt:1761989001067245980,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-168376,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d77c56f975b25263da49b87d1505d08,},Annotation
s:map[string]string{io.kubernetes.container.hash: bf915d6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c558adf-ae5e-4933-8ba6-dc3c868e01d2 name=/runtime.v1.RuntimeService/ListContainers
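	(Editor's note: the CRI-O debug entries above can also be pulled straight from the node rather than via `minikube logs`. A minimal sketch, assuming CRI-O runs as the `crio` systemd unit, as it does on the minikube ISO:)

	  # tail the CRI-O service journal inside the test VM
	  minikube -p test-preload-168376 ssh "sudo journalctl -u crio --no-pager -n 100"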
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aa96a36579899       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   13 seconds ago      Running             coredns                   1                   bde938e314e6c       coredns-668d6bf9bc-wjj7t
	d64acf91c4210       040f9f8aac8cd21d78f05ebfa9621ffb84e3257300c3cb1f72b539a3c3a2cd08   16 seconds ago      Running             kube-proxy                1                   95b7d04681acf       kube-proxy-4cnd6
	c092fdf0eaecb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Running             storage-provisioner       1                   1ba775a9ce195       storage-provisioner
	6e1286a4966bb       8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3   20 seconds ago      Running             kube-controller-manager   1                   534f62a541b70       kube-controller-manager-test-preload-168376
	d7058bb9289d8       a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc   20 seconds ago      Running             etcd                      1                   4ac191e40d2de       etcd-test-preload-168376
	75e05aa86afcc       a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5   20 seconds ago      Running             kube-scheduler            1                   44afd952e62da       kube-scheduler-test-preload-168376
	7517f73441998       c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4   20 seconds ago      Running             kube-apiserver            1                   15fcd960aa35a       kube-apiserver-test-preload-168376
	
	
	==> coredns [aa96a3657989950a701e8aac72b1913d930f78c415d5601a8ccaf1b5d7d18fdc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54383 - 38515 "HINFO IN 3391208439751754261.9185557593551824493. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.12013934s
	
	
	==> describe nodes <==
	Name:               test-preload-168376
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-168376
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=test-preload-168376
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_22_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:21:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-168376
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:23:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:23:26 +0000   Sat, 01 Nov 2025 09:21:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:23:26 +0000   Sat, 01 Nov 2025 09:21:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:23:26 +0000   Sat, 01 Nov 2025 09:21:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:23:26 +0000   Sat, 01 Nov 2025 09:23:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    test-preload-168376
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef1c7fcab6ff4b5b9eb48f4490239c3d
	  System UUID:                ef1c7fca-b6ff-4b5b-9eb4-8f4490239c3d
	  Boot ID:                    f8908b5c-1726-4a8a-9872-a9f3d2475ca2
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.32.0
	  Kube-Proxy Version:         v1.32.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-wjj7t                       100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     96s
	  kube-system                 etcd-test-preload-168376                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         101s
	  kube-system                 kube-apiserver-test-preload-168376             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-test-preload-168376    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-4cnd6                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-scheduler-test-preload-168376             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 94s                  kube-proxy       
	  Normal   Starting                 16s                  kube-proxy       
	  Normal   Starting                 107s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  107s (x8 over 107s)  kubelet          Node test-preload-168376 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    107s (x8 over 107s)  kubelet          Node test-preload-168376 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     107s (x7 over 107s)  kubelet          Node test-preload-168376 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 102s                 kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    101s                 kubelet          Node test-preload-168376 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  101s                 kubelet          Node test-preload-168376 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     101s                 kubelet          Node test-preload-168376 status is now: NodeHasSufficientPID
	  Normal   NodeReady                101s                 kubelet          Node test-preload-168376 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           97s                  node-controller  Node test-preload-168376 event: Registered Node test-preload-168376 in Controller
	  Normal   Starting                 24s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  23s (x8 over 24s)    kubelet          Node test-preload-168376 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x8 over 24s)    kubelet          Node test-preload-168376 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 24s)    kubelet          Node test-preload-168376 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 18s                  kubelet          Node test-preload-168376 has been rebooted, boot id: f8908b5c-1726-4a8a-9872-a9f3d2475ca2
	  Normal   RegisteredNode           15s                  node-controller  Node test-preload-168376 event: Registered Node test-preload-168376 in Controller
	
	
	==> dmesg <==
	[Nov 1 09:23] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001191] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005049] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +1.021273] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.103190] kauditd_printk_skb: 88 callbacks suppressed
	[  +6.605860] kauditd_printk_skb: 205 callbacks suppressed
	[ +10.512274] kauditd_printk_skb: 203 callbacks suppressed
	
	
	==> etcd [d7058bb9289d851c5f41feee507c1ead54787c306cd0db4bf9bee62307851e71] <==
	{"level":"info","ts":"2025-11-01T09:23:21.538215Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 switched to configuration voters=(7726016870774829891)"}
	{"level":"info","ts":"2025-11-01T09:23:21.538311Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"73eff271b33bb37a","local-member-id":"6b385368e7357343","added-peer-id":"6b385368e7357343","added-peer-peer-urls":["https://192.168.39.170:2380"]}
	{"level":"info","ts":"2025-11-01T09:23:21.538510Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"73eff271b33bb37a","local-member-id":"6b385368e7357343","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:23:21.538555Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:23:21.547378Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T09:23:21.547617Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"6b385368e7357343","initial-advertise-peer-urls":["https://192.168.39.170:2380"],"listen-peer-urls":["https://192.168.39.170:2380"],"advertise-client-urls":["https://192.168.39.170:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.170:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T09:23:21.547665Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T09:23:21.551581Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"192.168.39.170:2380"}
	{"level":"info","ts":"2025-11-01T09:23:21.551610Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"192.168.39.170:2380"}
	{"level":"info","ts":"2025-11-01T09:23:22.808408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T09:23:22.808479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T09:23:22.808526Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 received MsgPreVoteResp from 6b385368e7357343 at term 2"}
	{"level":"info","ts":"2025-11-01T09:23:22.808540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T09:23:22.808551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 received MsgVoteResp from 6b385368e7357343 at term 3"}
	{"level":"info","ts":"2025-11-01T09:23:22.808560Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T09:23:22.808569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b385368e7357343 elected leader 6b385368e7357343 at term 3"}
	{"level":"info","ts":"2025-11-01T09:23:22.813261Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"6b385368e7357343","local-member-attributes":"{Name:test-preload-168376 ClientURLs:[https://192.168.39.170:2379]}","request-path":"/0/members/6b385368e7357343/attributes","cluster-id":"73eff271b33bb37a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:23:22.813381Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:23:22.813428Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:23:22.813851Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T09:23:22.813919Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T09:23:22.814526Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-01T09:23:22.814603Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-11-01T09:23:22.815356Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.170:2379"}
	{"level":"info","ts":"2025-11-01T09:23:22.815380Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:23:42 up 0 min,  0 users,  load average: 1.06, 0.29, 0.10
	Linux test-preload-168376 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [7517f73441998747f90dc7695b6c1ce6cf2a2934df2bb4c22a94188b303534bb] <==
	I1101 09:23:23.989934       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:23:24.008469       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1101 09:23:24.008575       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:23:24.008597       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:23:24.008603       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:23:24.008607       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:23:24.014326       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:23:24.027817       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:23:24.058652       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1101 09:23:24.066604       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1101 09:23:24.066654       1 policy_source.go:240] refreshing policies
	I1101 09:23:24.082390       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:23:24.082431       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:23:24.082554       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:23:24.084584       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:23:24.115985       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:23:24.888358       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:23:24.984807       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1101 09:23:25.657533       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1101 09:23:25.696041       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1101 09:23:25.737059       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:23:25.744470       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:23:27.285699       1 controller.go:615] quota admission added evaluator for: endpoints
	I1101 09:23:27.538982       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:23:27.637321       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [6e1286a4966bb461ad33c350cfa6d497c8acfdfa629cf3faaa2869615258c7d3] <==
	I1101 09:23:27.280956       1 shared_informer.go:320] Caches are synced for deployment
	I1101 09:23:27.281062       1 shared_informer.go:320] Caches are synced for PVC protection
	I1101 09:23:27.281798       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I1101 09:23:27.281896       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I1101 09:23:27.282069       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I1101 09:23:27.282070       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1101 09:23:27.282228       1 shared_informer.go:320] Caches are synced for garbage collector
	I1101 09:23:27.282250       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:23:27.282257       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:23:27.284777       1 shared_informer.go:320] Caches are synced for service account
	I1101 09:23:27.289160       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I1101 09:23:27.293888       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1101 09:23:27.299214       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I1101 09:23:27.312588       1 shared_informer.go:320] Caches are synced for disruption
	I1101 09:23:27.314902       1 shared_informer.go:320] Caches are synced for expand
	I1101 09:23:27.323299       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I1101 09:23:27.327333       1 shared_informer.go:320] Caches are synced for crt configmap
	I1101 09:23:27.327478       1 shared_informer.go:320] Caches are synced for garbage collector
	I1101 09:23:27.337418       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I1101 09:23:27.340873       1 shared_informer.go:320] Caches are synced for resource quota
	I1101 09:23:27.644466       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="350.47654ms"
	I1101 09:23:27.644593       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.101µs"
	I1101 09:23:29.062895       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="43.067µs"
	I1101 09:23:29.756476       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="12.599542ms"
	I1101 09:23:29.756668       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="79.865µs"
	
	
	==> kube-proxy [d64acf91c4210bdc19b0f259b557d09debce4f87901b60a6b86d87cbb5e7faa7] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1101 09:23:25.620005       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1101 09:23:25.634360       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["192.168.39.170"]
	E1101 09:23:25.634541       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:23:25.694341       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I1101 09:23:25.694425       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:23:25.694452       1 server_linux.go:170] "Using iptables Proxier"
	I1101 09:23:25.700011       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:23:25.700783       1 server.go:497] "Version info" version="v1.32.0"
	I1101 09:23:25.700798       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:23:25.709438       1 config.go:199] "Starting service config controller"
	I1101 09:23:25.709484       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1101 09:23:25.709508       1 config.go:105] "Starting endpoint slice config controller"
	I1101 09:23:25.709512       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1101 09:23:25.711221       1 config.go:329] "Starting node config controller"
	I1101 09:23:25.711248       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1101 09:23:25.809665       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1101 09:23:25.809691       1 shared_informer.go:320] Caches are synced for service config
	I1101 09:23:25.811470       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [75e05aa86afcc117607f1b9b347e71bd21e6e8c06690628fdea64986428c4d66] <==
	I1101 09:23:22.307845       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:23:23.920985       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:23:23.921024       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:23:23.921033       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:23:23.921044       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:23:24.016730       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.0"
	I1101 09:23:24.016797       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:23:24.020880       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:23:24.021015       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 09:23:24.022596       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1101 09:23:24.022679       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:23:24.121187       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: I1101 09:23:24.698671    1160 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-test-preload-168376"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: E1101 09:23:24.714625    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-test-preload-168376\" already exists" pod="kube-system/kube-scheduler-test-preload-168376"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: I1101 09:23:24.714653    1160 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-test-preload-168376"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: E1101 09:23:24.722852    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-test-preload-168376\" already exists" pod="kube-system/kube-apiserver-test-preload-168376"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: I1101 09:23:24.722925    1160 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-test-preload-168376"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: E1101 09:23:24.731730    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-test-preload-168376\" already exists" pod="kube-system/kube-controller-manager-test-preload-168376"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: I1101 09:23:24.731758    1160 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-test-preload-168376"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: E1101 09:23:24.741212    1160 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-test-preload-168376\" already exists" pod="kube-system/etcd-test-preload-168376"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: I1101 09:23:24.890359    1160 apiserver.go:52] "Watching apiserver"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: E1101 09:23:24.899654    1160 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-668d6bf9bc-wjj7t" podUID="a8d1c095-21d4-4a44-8dba-65b727cd4e03"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: I1101 09:23:24.901762    1160 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: I1101 09:23:24.978597    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/82b5a635-9ddc-406c-8bcb-5fb53c95bad6-tmp\") pod \"storage-provisioner\" (UID: \"82b5a635-9ddc-406c-8bcb-5fb53c95bad6\") " pod="kube-system/storage-provisioner"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: I1101 09:23:24.978799    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e491c200-c23c-49cb-a67f-7f4d1a0dd161-xtables-lock\") pod \"kube-proxy-4cnd6\" (UID: \"e491c200-c23c-49cb-a67f-7f4d1a0dd161\") " pod="kube-system/kube-proxy-4cnd6"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: I1101 09:23:24.978976    1160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e491c200-c23c-49cb-a67f-7f4d1a0dd161-lib-modules\") pod \"kube-proxy-4cnd6\" (UID: \"e491c200-c23c-49cb-a67f-7f4d1a0dd161\") " pod="kube-system/kube-proxy-4cnd6"
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: E1101 09:23:24.979797    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 09:23:24 test-preload-168376 kubelet[1160]: E1101 09:23:24.980113    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8d1c095-21d4-4a44-8dba-65b727cd4e03-config-volume podName:a8d1c095-21d4-4a44-8dba-65b727cd4e03 nodeName:}" failed. No retries permitted until 2025-11-01 09:23:25.479910945 +0000 UTC m=+6.705635165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a8d1c095-21d4-4a44-8dba-65b727cd4e03-config-volume") pod "coredns-668d6bf9bc-wjj7t" (UID: "a8d1c095-21d4-4a44-8dba-65b727cd4e03") : object "kube-system"/"coredns" not registered
	Nov 01 09:23:25 test-preload-168376 kubelet[1160]: E1101 09:23:25.484351    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 09:23:25 test-preload-168376 kubelet[1160]: E1101 09:23:25.484426    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8d1c095-21d4-4a44-8dba-65b727cd4e03-config-volume podName:a8d1c095-21d4-4a44-8dba-65b727cd4e03 nodeName:}" failed. No retries permitted until 2025-11-01 09:23:26.484413238 +0000 UTC m=+7.710137466 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a8d1c095-21d4-4a44-8dba-65b727cd4e03-config-volume") pod "coredns-668d6bf9bc-wjj7t" (UID: "a8d1c095-21d4-4a44-8dba-65b727cd4e03") : object "kube-system"/"coredns" not registered
	Nov 01 09:23:26 test-preload-168376 kubelet[1160]: I1101 09:23:26.027111    1160 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
	Nov 01 09:23:26 test-preload-168376 kubelet[1160]: E1101 09:23:26.491321    1160 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 09:23:26 test-preload-168376 kubelet[1160]: E1101 09:23:26.491411    1160 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a8d1c095-21d4-4a44-8dba-65b727cd4e03-config-volume podName:a8d1c095-21d4-4a44-8dba-65b727cd4e03 nodeName:}" failed. No retries permitted until 2025-11-01 09:23:28.491397552 +0000 UTC m=+9.717121768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a8d1c095-21d4-4a44-8dba-65b727cd4e03-config-volume") pod "coredns-668d6bf9bc-wjj7t" (UID: "a8d1c095-21d4-4a44-8dba-65b727cd4e03") : object "kube-system"/"coredns" not registered
	Nov 01 09:23:28 test-preload-168376 kubelet[1160]: E1101 09:23:28.965899    1160 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989008965531274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 01 09:23:28 test-preload-168376 kubelet[1160]: E1101 09:23:28.965921    1160 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989008965531274,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 01 09:23:38 test-preload-168376 kubelet[1160]: E1101 09:23:38.968815    1160 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989018968474090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Nov 01 09:23:38 test-preload-168376 kubelet[1160]: E1101 09:23:38.968851    1160 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989018968474090,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:133495,},InodesUsed:&UInt64Value{Value:64,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [c092fdf0eaecb8c0558c7c4a759a800d38e70a6d0a59f2af5c08e66959cbdf4a] <==
	I1101 09:23:25.466600       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-168376 -n test-preload-168376
helpers_test.go:269: (dbg) Run:  kubectl --context test-preload-168376 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-168376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-168376
--- FAIL: TestPreload (151.48s)

                                                
                                    
TestKubernetesUpgrade (986.65s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-133315 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-133315 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.652688852s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-133315
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-133315: (1.88489481s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-133315 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-133315 status --format={{.Host}}: exit status 7 (63.394808ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-133315 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-133315 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.281924225s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-133315 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-133315 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-133315 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 106 (76.657717ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-133315] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-133315
	    minikube start -p kubernetes-upgrade-133315 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1333152 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-133315 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-133315 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-133315 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 80 (14m27.954355897s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-133315] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "kubernetes-upgrade-133315" primary control-plane node in "kubernetes-upgrade-133315" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:27:29.560252   36048 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:27:29.560527   36048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:27:29.560538   36048 out.go:374] Setting ErrFile to fd 2...
	I1101 09:27:29.560544   36048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:27:29.560737   36048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 09:27:29.561175   36048 out.go:368] Setting JSON to false
	I1101 09:27:29.562005   36048 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4197,"bootTime":1761985053,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:27:29.562100   36048 start.go:143] virtualization: kvm guest
	I1101 09:27:29.564177   36048 out.go:179] * [kubernetes-upgrade-133315] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:27:29.565576   36048 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:27:29.565557   36048 notify.go:221] Checking for updates...
	I1101 09:27:29.567520   36048 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:27:29.568886   36048 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:27:29.570108   36048 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:27:29.571275   36048 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:27:29.572470   36048 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:27:29.573928   36048 config.go:182] Loaded profile config "kubernetes-upgrade-133315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:27:29.574391   36048 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:27:29.607593   36048 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:27:29.608997   36048 start.go:309] selected driver: kvm2
	I1101 09:27:29.609015   36048 start.go:930] validating driver "kvm2" against &{Name:kubernetes-upgrade-133315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-133315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:27:29.609111   36048 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:27:29.610032   36048 cni.go:84] Creating CNI manager for ""
	I1101 09:27:29.610086   36048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:27:29.610114   36048 start.go:353] cluster config:
	{Name:kubernetes-upgrade-133315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-133315 Namespace:d
efault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:27:29.610196   36048 iso.go:125] acquiring lock: {Name:mk345092679db7c379cbaa00125c4f18e2b4a125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:27:29.611815   36048 out.go:179] * Starting "kubernetes-upgrade-133315" primary control-plane node in "kubernetes-upgrade-133315" cluster
	I1101 09:27:29.612927   36048 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:27:29.612955   36048 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:27:29.612961   36048 cache.go:59] Caching tarball of preloaded images
	I1101 09:27:29.613046   36048 preload.go:233] Found /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:27:29.613066   36048 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:27:29.613146   36048 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/config.json ...
	I1101 09:27:29.613343   36048 start.go:360] acquireMachinesLock for kubernetes-upgrade-133315: {Name:mk8049b4e421873947dfa0bcd96201ccb1e1825c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 09:27:57.818187   36048 start.go:364] duration metric: took 28.20480567s to acquireMachinesLock for "kubernetes-upgrade-133315"
	I1101 09:27:57.818282   36048 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:27:57.818294   36048 fix.go:54] fixHost starting: 
	I1101 09:27:57.820847   36048 fix.go:112] recreateIfNeeded on kubernetes-upgrade-133315: state=Running err=<nil>
	W1101 09:27:57.820884   36048 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:27:57.922421   36048 out.go:252] * Updating the running kvm2 "kubernetes-upgrade-133315" VM ...
	I1101 09:27:57.922481   36048 machine.go:94] provisionDockerMachine start ...
	I1101 09:27:57.926181   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:57.926804   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:27:57.926843   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:57.927109   36048 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:57.927433   36048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1101 09:27:57.927455   36048 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:27:58.056949   36048 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-133315
	
	I1101 09:27:58.056994   36048 buildroot.go:166] provisioning hostname "kubernetes-upgrade-133315"
	I1101 09:27:58.060859   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.061423   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:27:58.061464   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.061737   36048 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:58.061978   36048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1101 09:27:58.061995   36048 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-133315 && echo "kubernetes-upgrade-133315" | sudo tee /etc/hostname
	I1101 09:27:58.215182   36048 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-133315
	
	I1101 09:27:58.218910   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.219475   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:27:58.219506   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.219712   36048 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:58.219973   36048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1101 09:27:58.219993   36048 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-133315' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-133315/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-133315' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:27:58.348785   36048 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:27:58.348844   36048 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5912/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5912/.minikube}
	I1101 09:27:58.348863   36048 buildroot.go:174] setting up certificates
	I1101 09:27:58.348871   36048 provision.go:84] configureAuth start
	I1101 09:27:58.352247   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.352685   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:27:58.352720   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.355081   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.355520   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:27:58.355552   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.355706   36048 provision.go:143] copyHostCerts
	I1101 09:27:58.355767   36048 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem, removing ...
	I1101 09:27:58.355796   36048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem
	I1101 09:27:58.355872   36048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem (1082 bytes)
	I1101 09:27:58.356047   36048 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem, removing ...
	I1101 09:27:58.356060   36048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem
	I1101 09:27:58.356086   36048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem (1123 bytes)
	I1101 09:27:58.356171   36048 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem, removing ...
	I1101 09:27:58.356181   36048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem
	I1101 09:27:58.356233   36048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem (1679 bytes)
	I1101 09:27:58.356307   36048 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-133315 san=[127.0.0.1 192.168.39.77 kubernetes-upgrade-133315 localhost minikube]
	I1101 09:27:58.419258   36048 provision.go:177] copyRemoteCerts
	I1101 09:27:58.419309   36048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:27:58.421763   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.422146   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:27:58.422169   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.422357   36048 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/kubernetes-upgrade-133315/id_rsa Username:docker}
	I1101 09:27:58.518749   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:27:58.560317   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1101 09:27:58.686478   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:27:58.724376   36048 provision.go:87] duration metric: took 375.490534ms to configureAuth
	I1101 09:27:58.724409   36048 buildroot.go:189] setting minikube options for container-runtime
	I1101 09:27:58.784297   36048 config.go:182] Loaded profile config "kubernetes-upgrade-133315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:27:58.787393   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.787878   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:27:58.787917   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:27:58.788112   36048 main.go:143] libmachine: Using SSH client type: native
	I1101 09:27:58.788322   36048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1101 09:27:58.788336   36048 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:28:04.580091   36048 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:28:04.580121   36048 machine.go:97] duration metric: took 6.657632186s to provisionDockerMachine
	I1101 09:28:04.580160   36048 start.go:293] postStartSetup for "kubernetes-upgrade-133315" (driver="kvm2")
	I1101 09:28:04.580172   36048 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:28:04.580256   36048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:28:04.583868   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:28:04.584435   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:28:04.584475   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:28:04.584703   36048 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/kubernetes-upgrade-133315/id_rsa Username:docker}
	I1101 09:28:04.684517   36048 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:28:04.697758   36048 info.go:137] Remote host: Buildroot 2025.02
	I1101 09:28:04.697790   36048 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/addons for local assets ...
	I1101 09:28:04.697872   36048 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/files for local assets ...
	I1101 09:28:04.697965   36048 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem -> 97932.pem in /etc/ssl/certs
	I1101 09:28:04.698104   36048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:28:04.764849   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem --> /etc/ssl/certs/97932.pem (1708 bytes)
	I1101 09:28:04.825790   36048 start.go:296] duration metric: took 245.614962ms for postStartSetup
	I1101 09:28:04.825836   36048 fix.go:56] duration metric: took 7.007542449s for fixHost
	I1101 09:28:04.829583   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:28:04.830075   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:28:04.830113   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:28:04.830338   36048 main.go:143] libmachine: Using SSH client type: native
	I1101 09:28:04.830640   36048 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.39.77 22 <nil> <nil>}
	I1101 09:28:04.830653   36048 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 09:28:05.007787   36048 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761989284.999895397
	
	I1101 09:28:05.007812   36048 fix.go:216] guest clock: 1761989284.999895397
	I1101 09:28:05.007866   36048 fix.go:229] Guest: 2025-11-01 09:28:04.999895397 +0000 UTC Remote: 2025-11-01 09:28:04.825840422 +0000 UTC m=+35.314892939 (delta=174.054975ms)
	I1101 09:28:05.007885   36048 fix.go:200] guest clock delta is within tolerance: 174.054975ms
	I1101 09:28:05.007891   36048 start.go:83] releasing machines lock for "kubernetes-upgrade-133315", held for 7.189654909s
	I1101 09:28:05.011951   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:28:05.012544   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:28:05.012591   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:28:05.013325   36048 ssh_runner.go:195] Run: cat /version.json
	I1101 09:28:05.013404   36048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:28:05.017525   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:28:05.017963   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:28:05.017983   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:28:05.017999   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:28:05.018283   36048 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/kubernetes-upgrade-133315/id_rsa Username:docker}
	I1101 09:28:05.018636   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:28:05.018670   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:28:05.018834   36048 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/kubernetes-upgrade-133315/id_rsa Username:docker}
	I1101 09:28:05.161426   36048 ssh_runner.go:195] Run: systemctl --version
	I1101 09:28:05.214867   36048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:28:05.450035   36048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:28:05.463717   36048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:28:05.463807   36048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:28:05.491301   36048 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:28:05.491332   36048 start.go:496] detecting cgroup driver to use...
	I1101 09:28:05.491403   36048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:28:05.552637   36048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:28:05.613420   36048 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:28:05.613495   36048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:28:05.663672   36048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:28:05.700704   36048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:28:06.111576   36048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:28:06.410527   36048 docker.go:234] disabling docker service ...
	I1101 09:28:06.410599   36048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:28:06.481276   36048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:28:06.510709   36048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:28:06.760884   36048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:28:07.030760   36048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:28:07.059306   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:28:07.096611   36048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:28:07.096692   36048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:07.113273   36048 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:28:07.113345   36048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:07.131837   36048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:07.151699   36048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:07.171469   36048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:28:07.196577   36048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:07.224070   36048 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:07.241570   36048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:28:07.260074   36048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:28:07.276044   36048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:28:07.294042   36048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:28:07.515112   36048 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:29:37.886346   36048 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.371171103s)
	I1101 09:29:37.886396   36048 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:29:37.886466   36048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:29:37.893374   36048 start.go:564] Will wait 60s for crictl version
	I1101 09:29:37.893453   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:29:37.898479   36048 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 09:29:37.942395   36048 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 09:29:37.942523   36048 ssh_runner.go:195] Run: crio --version
	I1101 09:29:37.975866   36048 ssh_runner.go:195] Run: crio --version
	I1101 09:29:38.011862   36048 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 09:29:38.016647   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:29:38.017181   36048 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:33:2d:67", ip: ""} in network mk-kubernetes-upgrade-133315: {Iface:virbr1 ExpiryTime:2025-11-01 10:27:07 +0000 UTC Type:0 Mac:52:54:00:33:2d:67 Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:kubernetes-upgrade-133315 Clientid:01:52:54:00:33:2d:67}
	I1101 09:29:38.017236   36048 main.go:143] libmachine: domain kubernetes-upgrade-133315 has defined IP address 192.168.39.77 and MAC address 52:54:00:33:2d:67 in network mk-kubernetes-upgrade-133315
	I1101 09:29:38.017489   36048 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1101 09:29:38.024006   36048 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-133315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.34.1 ClusterName:kubernetes-upgrade-133315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:29:38.024131   36048 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:29:38.024196   36048 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:29:38.077446   36048 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:29:38.077480   36048 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:29:38.077544   36048 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:29:38.115815   36048 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:29:38.115844   36048 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:29:38.115854   36048 kubeadm.go:935] updating node { 192.168.39.77 8443 v1.34.1 crio true true} ...
	I1101 09:29:38.115984   36048 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-133315 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.77
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-133315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:29:38.116082   36048 ssh_runner.go:195] Run: crio config
	I1101 09:29:38.181923   36048 cni.go:84] Creating CNI manager for ""
	I1101 09:29:38.181945   36048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:29:38.181959   36048 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:29:38.181979   36048 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.77 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-133315 NodeName:kubernetes-upgrade-133315 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.77"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.77 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:29:38.182107   36048 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.77
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-133315"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.77"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.77"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:29:38.182169   36048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:29:38.194950   36048 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:29:38.195018   36048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:29:38.207308   36048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I1101 09:29:38.229054   36048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:29:38.253332   36048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1101 09:29:38.278058   36048 ssh_runner.go:195] Run: grep 192.168.39.77	control-plane.minikube.internal$ /etc/hosts
	I1101 09:29:38.284268   36048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:29:38.462706   36048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:29:38.480758   36048 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315 for IP: 192.168.39.77
	I1101 09:29:38.480779   36048 certs.go:195] generating shared ca certs ...
	I1101 09:29:38.480793   36048 certs.go:227] acquiring lock for ca certs: {Name:mk23a33d19209ad24f4406326ada43ab5cb57960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:29:38.480948   36048 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key
	I1101 09:29:38.480989   36048 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key
	I1101 09:29:38.480999   36048 certs.go:257] generating profile certs ...
	I1101 09:29:38.481067   36048 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/client.key
	I1101 09:29:38.481114   36048 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/apiserver.key.d90b93cf
	I1101 09:29:38.481148   36048 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/proxy-client.key
	I1101 09:29:38.481269   36048 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793.pem (1338 bytes)
	W1101 09:29:38.481306   36048 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793_empty.pem, impossibly tiny 0 bytes
	I1101 09:29:38.481316   36048 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:29:38.481338   36048 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:29:38.481360   36048 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:29:38.481382   36048 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem (1679 bytes)
	I1101 09:29:38.481421   36048 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem (1708 bytes)
	I1101 09:29:38.482119   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:29:38.511841   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:29:38.540326   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:29:38.568751   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 09:29:38.597631   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1101 09:29:38.626507   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:29:38.654942   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:29:38.684305   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:29:38.714010   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:29:38.743614   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793.pem --> /usr/share/ca-certificates/9793.pem (1338 bytes)
	I1101 09:29:38.774892   36048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem --> /usr/share/ca-certificates/97932.pem (1708 bytes)
	I1101 09:29:38.804295   36048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:29:38.830517   36048 ssh_runner.go:195] Run: openssl version
	I1101 09:29:38.837195   36048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9793.pem && ln -fs /usr/share/ca-certificates/9793.pem /etc/ssl/certs/9793.pem"
	I1101 09:29:38.855330   36048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9793.pem
	I1101 09:29:38.861561   36048 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:38 /usr/share/ca-certificates/9793.pem
	I1101 09:29:38.861627   36048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9793.pem
	I1101 09:29:38.872397   36048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9793.pem /etc/ssl/certs/51391683.0"
	I1101 09:29:38.887675   36048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97932.pem && ln -fs /usr/share/ca-certificates/97932.pem /etc/ssl/certs/97932.pem"
	I1101 09:29:38.905635   36048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97932.pem
	I1101 09:29:38.912166   36048 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:38 /usr/share/ca-certificates/97932.pem
	I1101 09:29:38.912261   36048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97932.pem
	I1101 09:29:38.922725   36048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/97932.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:29:38.940147   36048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:29:38.960368   36048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:38.967431   36048 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:38.967510   36048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:29:38.975471   36048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:29:38.987782   36048 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:29:38.993157   36048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:29:39.000851   36048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:29:39.008483   36048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:29:39.016045   36048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:29:39.025009   36048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:29:39.032443   36048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:29:39.040264   36048 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-133315 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.34.1 ClusterName:kubernetes-upgrade-133315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.77 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:29:39.040374   36048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:29:39.040450   36048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:29:39.109773   36048 cri.go:89] found id: "829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be"
	I1101 09:29:39.109800   36048 cri.go:89] found id: "c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6"
	I1101 09:29:39.109812   36048 cri.go:89] found id: "348c9b789cc0f92cfaa6dd048a3a1b36c818dfe749ca3bf33636fd5fc4316908"
	I1101 09:29:39.109817   36048 cri.go:89] found id: "4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558"
	I1101 09:29:39.109822   36048 cri.go:89] found id: "7e941e87349a482f4e42e679bd24e6e37039f1cadf1af39294ca2fa9640c32b0"
	I1101 09:29:39.109827   36048 cri.go:89] found id: "705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d"
	I1101 09:29:39.109831   36048 cri.go:89] found id: "7dc245ce0bf97fe8174bc9ec6748e8ec4b72abec4f546f2269a53dad53e7f9e8"
	I1101 09:29:39.109834   36048 cri.go:89] found id: "ec276429489fbb3ef46fe755286372ce2c7886402d27c811567093cb74576ed3"
	I1101 09:29:39.109838   36048 cri.go:89] found id: "1ec289eac838ae5e985dea1aa329f62406fedee085860b4ca466adb1a5b38930"
	I1101 09:29:39.109848   36048 cri.go:89] found id: "936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea"
	I1101 09:29:39.109852   36048 cri.go:89] found id: "f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92"
	I1101 09:29:39.109856   36048 cri.go:89] found id: ""
	I1101 09:29:39.109914   36048 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
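One step that stands out in the stderr block above is the CRI-O reconfiguration: minikube rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup) and then runs "sudo systemctl restart crio", and in this run that restart alone took 1m30.371171103s (09:28:07 → 09:29:37) before the kubeadm phase began. The Go sketch below is a minimal, illustrative reproduction of those same drop-in edits for local experimentation only; it assumes you are root on a guest that has CRI-O and sh/sed available, whereas minikube itself issues these commands over SSH through its ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell command, mirroring the "sh -c" invocations in the log above.
func run(cmd string) error {
	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// Same sed edits minikube applied at 09:28:07 in the log above.
		`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' ` + conf,
		`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		"systemctl daemon-reload",
		"systemctl restart crio", // the step that took ~90s in this run
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("CRI-O reconfigured and restarted")
}

The sed patterns are copied verbatim from the log; everything else (package layout, error handling) is an assumption for the sketch, not minikube's actual provisioning code.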
version_upgrade_test.go:277: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-133315 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 80
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-11-01 09:41:57.468467824 +0000 UTC m=+4369.886164967
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-133315 -n kubernetes-upgrade-133315
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-133315 -n kubernetes-upgrade-133315: exit status 2 (231.382293ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
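As the "may be ok" note above indicates, the post-mortem treats exit status 2 from "minikube status --format={{.Host}}" as informational: the host can report Running while cluster components are not healthy, so stdout is still useful. A minimal Go sketch of the same probe pattern follows; the binary path, profile, and flags are taken from the log line above, and the rest is illustrative rather than the test helper's actual code.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the post-mortem status step above.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "kubernetes-upgrade-133315",
		"-n", "kubernetes-upgrade-133315")
	out, err := cmd.Output() // stdout (e.g. "Running") is captured even on a non-zero exit

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host state: %s", out)
	case errors.As(err, &exitErr):
		// Non-zero exit (e.g. 2) still carries the host state; log it instead of failing.
		fmt.Printf("host state: %s(exit code %d, may be ok)\n", out, exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube status:", err)
	}
}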
helpers_test.go:252: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-133315 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-133315 logs -n 25: (1.141743421s)
helpers_test.go:260: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-997526 sudo systemctl status kubelet --all --full --no-pager                                                       │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo systemctl cat kubelet --no-pager                                                                       │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo journalctl -xeu kubelet --all --full --no-pager                                                        │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo cat /etc/kubernetes/kubelet.conf                                                                       │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo cat /var/lib/kubelet/config.yaml                                                                       │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo systemctl status docker --all --full --no-pager                                                        │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │                     │
	│ ssh     │ -p custom-flannel-997526 sudo systemctl cat docker --no-pager                                                                        │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo cat /etc/docker/daemon.json                                                                            │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo docker system info                                                                                     │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │                     │
	│ ssh     │ -p custom-flannel-997526 sudo systemctl status cri-docker --all --full --no-pager                                                    │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │                     │
	│ ssh     │ -p custom-flannel-997526 sudo systemctl cat cri-docker --no-pager                                                                    │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                               │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │                     │
	│ ssh     │ -p custom-flannel-997526 sudo cat /usr/lib/systemd/system/cri-docker.service                                                         │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo cri-dockerd --version                                                                                  │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo systemctl status containerd --all --full --no-pager                                                    │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │                     │
	│ ssh     │ -p custom-flannel-997526 sudo systemctl cat containerd --no-pager                                                                    │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo cat /lib/systemd/system/containerd.service                                                             │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo cat /etc/containerd/config.toml                                                                        │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo containerd config dump                                                                                 │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo systemctl status crio --all --full --no-pager                                                          │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo systemctl cat crio --no-pager                                                                          │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ ssh     │ -p custom-flannel-997526 sudo crio config                                                                                            │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ delete  │ -p custom-flannel-997526                                                                                                             │ custom-flannel-997526 │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │ 01 Nov 25 09:41 UTC │
	│ start   │ -p bridge-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio │ bridge-997526         │ jenkins │ v1.37.0 │ 01 Nov 25 09:41 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:41:39
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:41:39.452200   50360 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:41:39.452581   50360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:41:39.452597   50360 out.go:374] Setting ErrFile to fd 2...
	I1101 09:41:39.452604   50360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:41:39.452895   50360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 09:41:39.453647   50360 out.go:368] Setting JSON to false
	I1101 09:41:39.455041   50360 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5046,"bootTime":1761985053,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:41:39.455183   50360 start.go:143] virtualization: kvm guest
	I1101 09:41:39.457773   50360 out.go:179] * [bridge-997526] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:41:39.459271   50360 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:41:39.459272   50360 notify.go:221] Checking for updates...
	I1101 09:41:39.460658   50360 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:41:39.461958   50360 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:41:39.463153   50360 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:41:39.464441   50360 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:41:39.465732   50360 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:41:35.329461   48538 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:41:35.672987   48538 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:41:35.831734   48538 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:41:36.501859   48538 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:41:36.502109   48538 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-997526 localhost] and IPs [192.168.50.176 127.0.0.1 ::1]
	I1101 09:41:36.767944   48538 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:41:36.768200   48538 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-997526 localhost] and IPs [192.168.50.176 127.0.0.1 ::1]
	I1101 09:41:37.266715   48538 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:41:37.593083   48538 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:41:37.854610   48538 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:41:37.854887   48538 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:41:38.002612   48538 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:41:38.543189   48538 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:41:38.860663   48538 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:41:39.233697   48538 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:41:39.482688   48538 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:41:39.483250   48538 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:41:39.486201   48538 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:41:39.467595   50360 config.go:182] Loaded profile config "enable-default-cni-997526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:41:39.467737   50360 config.go:182] Loaded profile config "flannel-997526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:41:39.467871   50360 config.go:182] Loaded profile config "guest-649821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 09:41:39.468000   50360 config.go:182] Loaded profile config "kubernetes-upgrade-133315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:41:39.468136   50360 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:41:39.515690   50360 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 09:41:39.517062   50360 start.go:309] selected driver: kvm2
	I1101 09:41:39.517093   50360 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:41:39.517110   50360 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:41:39.517864   50360 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:41:39.518134   50360 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:41:39.518166   50360 cni.go:84] Creating CNI manager for "bridge"
	I1101 09:41:39.518172   50360 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:41:39.518243   50360 start.go:353] cluster config:
	{Name:bridge-997526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-997526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1101 09:41:39.518366   50360 iso.go:125] acquiring lock: {Name:mk345092679db7c379cbaa00125c4f18e2b4a125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:41:39.520191   50360 out.go:179] * Starting "bridge-997526" primary control-plane node in "bridge-997526" cluster
	I1101 09:41:39.487962   48538 out.go:252]   - Booting up control plane ...
	I1101 09:41:39.488082   48538 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:41:39.488187   48538 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:41:39.489114   48538 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:41:39.513350   48538 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:41:39.513516   48538 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:41:39.525285   48538 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:41:39.525392   48538 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:41:39.525475   48538 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:41:39.695751   48538 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:41:39.695966   48538 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:41:36.686435   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:36.687136   49576 main.go:143] libmachine: no network interface addresses found for domain flannel-997526 (source=lease)
	I1101 09:41:36.687152   49576 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:36.687616   49576 main.go:143] libmachine: unable to find current IP address of domain flannel-997526 in network mk-flannel-997526 (interfaces detected: [])
	I1101 09:41:36.687649   49576 retry.go:31] will retry after 1.917509919s: waiting for domain to come up
	I1101 09:41:38.607735   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:38.797094   49576 main.go:143] libmachine: no network interface addresses found for domain flannel-997526 (source=lease)
	I1101 09:41:38.797122   49576 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:38.797676   49576 main.go:143] libmachine: unable to find current IP address of domain flannel-997526 in network mk-flannel-997526 (interfaces detected: [])
	I1101 09:41:38.797726   49576 retry.go:31] will retry after 2.400017363s: waiting for domain to come up
	I1101 09:41:39.521538   50360 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:41:39.521591   50360 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:41:39.521607   50360 cache.go:59] Caching tarball of preloaded images
	I1101 09:41:39.521716   50360 preload.go:233] Found /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:41:39.521732   50360 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:41:39.521901   50360 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/bridge-997526/config.json ...
	I1101 09:41:39.521935   50360 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/bridge-997526/config.json: {Name:mk489122046217ada41fb2efa15555c1bb8f9c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:41:39.522156   50360 start.go:360] acquireMachinesLock for bridge-997526: {Name:mk8049b4e421873947dfa0bcd96201ccb1e1825c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 09:41:40.697736   48538 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.002372848s
	I1101 09:41:40.700937   48538 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:41:40.701100   48538 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.50.176:8443/livez
	I1101 09:41:40.701248   48538 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:41:40.701362   48538 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:41:43.197057   48538 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.496755697s
	I1101 09:41:43.842668   48538 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.142904521s
	I1101 09:41:41.199113   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:41.199877   49576 main.go:143] libmachine: no network interface addresses found for domain flannel-997526 (source=lease)
	I1101 09:41:41.199896   49576 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:41.200456   49576 main.go:143] libmachine: unable to find current IP address of domain flannel-997526 in network mk-flannel-997526 (interfaces detected: [])
	I1101 09:41:41.200492   49576 retry.go:31] will retry after 4.471137594s: waiting for domain to come up
	I1101 09:41:45.700560   48538 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001455783s
	I1101 09:41:45.714623   48538 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:41:45.730886   48538 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:41:45.742946   48538 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:41:45.743232   48538 kubeadm.go:319] [mark-control-plane] Marking the node enable-default-cni-997526 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:41:45.756576   48538 kubeadm.go:319] [bootstrap-token] Using token: vqu2sz.5fppqsyy893zqwds
	I1101 09:41:45.758025   48538 out.go:252]   - Configuring RBAC rules ...
	I1101 09:41:45.758191   48538 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:41:45.767556   48538 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:41:45.777160   48538 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:41:45.781872   48538 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:41:45.788128   48538 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:41:45.793286   48538 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:41:46.107624   48538 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:41:46.576836   48538 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:41:47.107703   48538 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:41:47.109016   48538 kubeadm.go:319] 
	I1101 09:41:47.109110   48538 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:41:47.109121   48538 kubeadm.go:319] 
	I1101 09:41:47.109261   48538 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:41:47.109278   48538 kubeadm.go:319] 
	I1101 09:41:47.109304   48538 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:41:47.109354   48538 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:41:47.109437   48538 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:41:47.109448   48538 kubeadm.go:319] 
	I1101 09:41:47.109526   48538 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:41:47.109536   48538 kubeadm.go:319] 
	I1101 09:41:47.109613   48538 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:41:47.109631   48538 kubeadm.go:319] 
	I1101 09:41:47.109717   48538 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:41:47.109783   48538 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:41:47.109879   48538 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:41:47.109895   48538 kubeadm.go:319] 
	I1101 09:41:47.109970   48538 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:41:47.110038   48538 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:41:47.110044   48538 kubeadm.go:319] 
	I1101 09:41:47.110146   48538 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token vqu2sz.5fppqsyy893zqwds \
	I1101 09:41:47.110301   48538 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a5abe2adb0c939d52fba184971121a4379087a8fcf67d55f536fc49608a1d330 \
	I1101 09:41:47.110337   48538 kubeadm.go:319] 	--control-plane 
	I1101 09:41:47.110346   48538 kubeadm.go:319] 
	I1101 09:41:47.110467   48538 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:41:47.110481   48538 kubeadm.go:319] 
	I1101 09:41:47.110620   48538 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token vqu2sz.5fppqsyy893zqwds \
	I1101 09:41:47.110801   48538 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:a5abe2adb0c939d52fba184971121a4379087a8fcf67d55f536fc49608a1d330 
	I1101 09:41:47.112297   48538 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
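	The [WARNING Service-Kubelet] line above is kubeadm pointing out that the kubelet systemd unit is not enabled for boot inside the guest. A minimal sketch of the manual fix it suggests, assuming a standard systemd layout in the minikube ISO:

# enable kubelet at boot and confirm the unit state
sudo systemctl enable kubelet.service
systemctl is-enabled kubelet.service     # expect: enabled
systemctl status kubelet --no-pager      # active once the static pods are running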
	I1101 09:41:47.112337   48538 cni.go:84] Creating CNI manager for "bridge"
	I1101 09:41:47.114226   48538 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 09:41:47.281112   50360 start.go:364] duration metric: took 7.758904679s to acquireMachinesLock for "bridge-997526"
	I1101 09:41:47.281182   50360 start.go:93] Provisioning new machine with config: &{Name:bridge-997526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:bridge-997526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:41:47.281325   50360 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 09:41:45.672982   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:45.673799   49576 main.go:143] libmachine: domain flannel-997526 has current primary IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:45.673821   49576 main.go:143] libmachine: found domain IP: 192.168.83.140
	I1101 09:41:45.673829   49576 main.go:143] libmachine: reserving static IP address...
	I1101 09:41:45.674284   49576 main.go:143] libmachine: unable to find host DHCP lease matching {name: "flannel-997526", mac: "52:54:00:01:4f:7e", ip: "192.168.83.140"} in network mk-flannel-997526
	I1101 09:41:45.921283   49576 main.go:143] libmachine: reserved static IP address 192.168.83.140 for domain flannel-997526
	I1101 09:41:45.921311   49576 main.go:143] libmachine: waiting for SSH...
	I1101 09:41:45.921320   49576 main.go:143] libmachine: Getting to WaitForSSH function...
	I1101 09:41:45.924810   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:45.925400   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:minikube Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:45.925445   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:45.925670   49576 main.go:143] libmachine: Using SSH client type: native
	I1101 09:41:45.925980   49576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.140 22 <nil> <nil>}
	I1101 09:41:45.925998   49576 main.go:143] libmachine: About to run SSH command:
	exit 0
	I1101 09:41:46.034483   49576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:41:46.034901   49576 main.go:143] libmachine: domain creation complete
	I1101 09:41:46.036733   49576 machine.go:94] provisionDockerMachine start ...
	I1101 09:41:46.039488   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.039952   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:46.039984   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.040167   49576 main.go:143] libmachine: Using SSH client type: native
	I1101 09:41:46.040404   49576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.140 22 <nil> <nil>}
	I1101 09:41:46.040417   49576 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:41:46.154268   49576 main.go:143] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1101 09:41:46.154296   49576 buildroot.go:166] provisioning hostname "flannel-997526"
	I1101 09:41:46.157967   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.158461   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:46.158487   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.158648   49576 main.go:143] libmachine: Using SSH client type: native
	I1101 09:41:46.158862   49576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.140 22 <nil> <nil>}
	I1101 09:41:46.158884   49576 main.go:143] libmachine: About to run SSH command:
	sudo hostname flannel-997526 && echo "flannel-997526" | sudo tee /etc/hostname
	I1101 09:41:46.292098   49576 main.go:143] libmachine: SSH cmd err, output: <nil>: flannel-997526
	
	I1101 09:41:46.295558   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.295955   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:46.295985   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.296147   49576 main.go:143] libmachine: Using SSH client type: native
	I1101 09:41:46.296411   49576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.140 22 <nil> <nil>}
	I1101 09:41:46.296435   49576 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-997526' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-997526/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-997526' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:41:46.418029   49576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:41:46.418059   49576 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5912/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5912/.minikube}
	I1101 09:41:46.418092   49576 buildroot.go:174] setting up certificates
	I1101 09:41:46.418101   49576 provision.go:84] configureAuth start
	I1101 09:41:46.421958   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.422483   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:46.422521   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.425695   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.426188   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:46.426236   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.426409   49576 provision.go:143] copyHostCerts
	I1101 09:41:46.426486   49576 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem, removing ...
	I1101 09:41:46.426507   49576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem
	I1101 09:41:46.426595   49576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem (1082 bytes)
	I1101 09:41:46.426734   49576 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem, removing ...
	I1101 09:41:46.426749   49576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem
	I1101 09:41:46.426794   49576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem (1123 bytes)
	I1101 09:41:46.426901   49576 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem, removing ...
	I1101 09:41:46.426912   49576 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem
	I1101 09:41:46.426955   49576 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem (1679 bytes)
	I1101 09:41:46.427036   49576 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem org=jenkins.flannel-997526 san=[127.0.0.1 192.168.83.140 flannel-997526 localhost minikube]
	I1101 09:41:46.572238   49576 provision.go:177] copyRemoteCerts
	I1101 09:41:46.572294   49576 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:41:46.575766   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.576244   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:46.576271   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.576425   49576 sshutil.go:53] new ssh client: &{IP:192.168.83.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/flannel-997526/id_rsa Username:docker}
	I1101 09:41:46.663634   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:41:46.698441   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1101 09:41:46.732494   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:41:46.763869   49576 provision.go:87] duration metric: took 345.750178ms to configureAuth
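	configureAuth above generates a server certificate whose SANs come from the san=[...] list logged at provision.go:117 and then copies it to /etc/docker/server.pem in the guest. A quick way to confirm the SANs made it into the copied cert, sketched here assuming openssl is available wherever the file is inspected:

# print the Subject Alternative Names of the provisioned server cert
sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
# the output should list the hostnames and IPs from the san=[...] line above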
	I1101 09:41:46.763906   49576 buildroot.go:189] setting minikube options for container-runtime
	I1101 09:41:46.764072   49576 config.go:182] Loaded profile config "flannel-997526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:41:46.767046   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.767470   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:46.767500   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:46.767685   49576 main.go:143] libmachine: Using SSH client type: native
	I1101 09:41:46.767903   49576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.140 22 <nil> <nil>}
	I1101 09:41:46.767924   49576 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:41:47.012221   49576 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:41:47.012251   49576 machine.go:97] duration metric: took 975.498374ms to provisionDockerMachine
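	The tee/restart command above drops the insecure-registry option into /etc/sysconfig/crio.minikube and bounces cri-o. A short sketch for checking that the option actually took effect (the EnvironmentFile wiring is an assumption about how the ISO's crio unit consumes that file):

# confirm the drop-in contents and that crio restarted with it
cat /etc/sysconfig/crio.minikube
sudo systemctl cat crio | grep -i environmentfile
sudo systemctl show crio -p ActiveState -p ExecMainStartTimestamp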
	I1101 09:41:47.012263   49576 client.go:176] duration metric: took 20.896749084s to LocalClient.Create
	I1101 09:41:47.012277   49576 start.go:167] duration metric: took 20.896811704s to libmachine.API.Create "flannel-997526"
	I1101 09:41:47.012286   49576 start.go:293] postStartSetup for "flannel-997526" (driver="kvm2")
	I1101 09:41:47.012298   49576 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:41:47.012363   49576 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:41:47.015642   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.016237   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:47.016265   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.016413   49576 sshutil.go:53] new ssh client: &{IP:192.168.83.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/flannel-997526/id_rsa Username:docker}
	I1101 09:41:47.101863   49576 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:41:47.106740   49576 info.go:137] Remote host: Buildroot 2025.02
	I1101 09:41:47.106779   49576 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/addons for local assets ...
	I1101 09:41:47.106864   49576 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/files for local assets ...
	I1101 09:41:47.106990   49576 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem -> 97932.pem in /etc/ssl/certs
	I1101 09:41:47.107135   49576 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:41:47.120558   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem --> /etc/ssl/certs/97932.pem (1708 bytes)
	I1101 09:41:47.157018   49576 start.go:296] duration metric: took 144.716497ms for postStartSetup
	I1101 09:41:47.160548   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.160989   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:47.161019   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.161252   49576 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/config.json ...
	I1101 09:41:47.161465   49576 start.go:128] duration metric: took 21.048683775s to createHost
	I1101 09:41:47.163626   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.164089   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:47.164118   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.164369   49576 main.go:143] libmachine: Using SSH client type: native
	I1101 09:41:47.164589   49576 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.83.140 22 <nil> <nil>}
	I1101 09:41:47.164607   49576 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 09:41:47.280929   49576 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761990107.244072923
	
	I1101 09:41:47.280954   49576 fix.go:216] guest clock: 1761990107.244072923
	I1101 09:41:47.280965   49576 fix.go:229] Guest: 2025-11-01 09:41:47.244072923 +0000 UTC Remote: 2025-11-01 09:41:47.161479587 +0000 UTC m=+22.040948818 (delta=82.593336ms)
	I1101 09:41:47.280983   49576 fix.go:200] guest clock delta is within tolerance: 82.593336ms
	I1101 09:41:47.280991   49576 start.go:83] releasing machines lock for "flannel-997526", held for 21.168365504s
	I1101 09:41:47.284719   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.285296   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:47.285329   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.286034   49576 ssh_runner.go:195] Run: cat /version.json
	I1101 09:41:47.286109   49576 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:41:47.290130   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.290260   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.290587   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:47.290617   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.290753   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:47.290789   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:47.290984   49576 sshutil.go:53] new ssh client: &{IP:192.168.83.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/flannel-997526/id_rsa Username:docker}
	I1101 09:41:47.291036   49576 sshutil.go:53] new ssh client: &{IP:192.168.83.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/flannel-997526/id_rsa Username:docker}
	I1101 09:41:47.376690   49576 ssh_runner.go:195] Run: systemctl --version
	I1101 09:41:47.405284   49576 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:41:47.579311   49576 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:41:47.587701   49576 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:41:47.587785   49576 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:41:47.613939   49576 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:41:47.613968   49576 start.go:496] detecting cgroup driver to use...
	I1101 09:41:47.614049   49576 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:41:47.639100   49576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:41:47.662739   49576 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:41:47.662804   49576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:41:47.685096   49576 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:41:47.703953   49576 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:41:47.865563   49576 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:41:48.090875   49576 docker.go:234] disabling docker service ...
	I1101 09:41:48.090944   49576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:41:48.109779   49576 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:41:48.129463   49576 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:41:48.289071   49576 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:41:48.446463   49576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:41:48.462301   49576 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:41:48.486681   49576 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:41:48.486751   49576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:41:48.500829   49576 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:41:48.500893   49576 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:41:48.514498   49576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:41:48.532465   49576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:41:48.550030   49576 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:41:48.564437   49576 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:41:48.577309   49576 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:41:48.597731   49576 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:41:48.609620   49576 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:41:48.619985   49576 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1101 09:41:48.620040   49576 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1101 09:41:48.641234   49576 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
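	The three commands above cover the usual kernel prerequisites: the sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is switched on for the current boot. A persistent equivalent on a generic systemd host would look roughly like this (the file names are conventional choices, not what minikube writes):

# load the bridge netfilter module now and on every boot
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
# make the bridge/iptables and forwarding sysctls persistent
printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
sudo sysctl --system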
	I1101 09:41:48.652803   49576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:41:48.811113   49576 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:41:48.959661   49576 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:41:48.959728   49576 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:41:48.968527   49576 start.go:564] Will wait 60s for crictl version
	I1101 09:41:48.968616   49576 ssh_runner.go:195] Run: which crictl
	I1101 09:41:48.974946   49576 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 09:41:49.017261   49576 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 09:41:49.017354   49576 ssh_runner.go:195] Run: crio --version
	I1101 09:41:49.062307   49576 ssh_runner.go:195] Run: crio --version
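	With cri-o restarted, the runner checks the socket with crictl and crio --version. The same checks can be repeated by hand against the endpoint written to /etc/crictl.yaml earlier; these are plain crictl/cri-o commands, nothing minikube-specific:

# confirm the CRI endpoint answers and see what the runtime already has
sudo crictl version
sudo crictl info | head -n 20            # runtime status summary
sudo crictl images --output json | head -c 400
crio --version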
	I1101 09:41:49.103905   49576 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 09:41:47.283286   50360 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1101 09:41:47.283517   50360 start.go:159] libmachine.API.Create for "bridge-997526" (driver="kvm2")
	I1101 09:41:47.283559   50360 client.go:173] LocalClient.Create starting
	I1101 09:41:47.283656   50360 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem
	I1101 09:41:47.283729   50360 main.go:143] libmachine: Decoding PEM data...
	I1101 09:41:47.283753   50360 main.go:143] libmachine: Parsing certificate...
	I1101 09:41:47.283811   50360 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem
	I1101 09:41:47.283847   50360 main.go:143] libmachine: Decoding PEM data...
	I1101 09:41:47.283859   50360 main.go:143] libmachine: Parsing certificate...
	I1101 09:41:47.284314   50360 main.go:143] libmachine: creating domain...
	I1101 09:41:47.284329   50360 main.go:143] libmachine: creating network...
	I1101 09:41:47.285922   50360 main.go:143] libmachine: found existing default network
	I1101 09:41:47.286240   50360 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:41:47.287660   50360 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:98:e9:70} reservation:<nil>}
	I1101 09:41:47.288753   50360 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fa:0f:ff} reservation:<nil>}
	I1101 09:41:47.289615   50360 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:c4:3e} reservation:<nil>}
	I1101 09:41:47.290889   50360 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dc2e10}
	I1101 09:41:47.290986   50360 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-bridge-997526</name>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:41:47.301069   50360 main.go:143] libmachine: creating private network mk-bridge-997526 192.168.72.0/24...
	I1101 09:41:47.395615   50360 main.go:143] libmachine: private network mk-bridge-997526 192.168.72.0/24 created
	I1101 09:41:47.396020   50360 main.go:143] libmachine: <network>
	  <name>mk-bridge-997526</name>
	  <uuid>1b141f8c-aed7-4d20-839c-1aba131804fe</uuid>
	  <bridge name='virbr4' stp='on' delay='0'/>
	  <mac address='52:54:00:69:aa:34'/>
	  <dns enable='no'/>
	  <ip address='192.168.72.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.72.2' end='192.168.72.253'/>
	    </dhcp>
	  </ip>
	</network>
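	The private network above is created through the libvirt API; the same definition can be applied and inspected by hand with virsh, which is useful when debugging leftover mk-* networks from earlier runs (the XML file name below is illustrative):

# define, start and inspect a private network like mk-bridge-997526
virsh --connect qemu:///system net-define mk-bridge-997526.xml
virsh --connect qemu:///system net-start mk-bridge-997526
virsh --connect qemu:///system net-dumpxml mk-bridge-997526
virsh --connect qemu:///system net-dhcp-leases mk-bridge-997526   # leases appear once a guest boots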
	
	I1101 09:41:47.396053   50360 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21835-5912/.minikube/machines/bridge-997526 ...
	I1101 09:41:47.396087   50360 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21835-5912/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 09:41:47.396100   50360 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:41:47.396180   50360 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21835-5912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21835-5912/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 09:41:47.660331   50360 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/bridge-997526/id_rsa...
	I1101 09:41:47.810426   50360 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/bridge-997526/bridge-997526.rawdisk...
	I1101 09:41:47.810470   50360 main.go:143] libmachine: Writing magic tar header
	I1101 09:41:47.810487   50360 main.go:143] libmachine: Writing SSH key tar header
	I1101 09:41:47.810564   50360 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21835-5912/.minikube/machines/bridge-997526 ...
	I1101 09:41:47.810631   50360 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/bridge-997526
	I1101 09:41:47.810651   50360 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube/machines/bridge-997526 (perms=drwx------)
	I1101 09:41:47.810664   50360 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube/machines
	I1101 09:41:47.810673   50360 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube/machines (perms=drwxr-xr-x)
	I1101 09:41:47.810686   50360 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:41:47.810697   50360 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube (perms=drwxr-xr-x)
	I1101 09:41:47.810705   50360 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912
	I1101 09:41:47.810713   50360 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912 (perms=drwxrwxr-x)
	I1101 09:41:47.810722   50360 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 09:41:47.810730   50360 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 09:41:47.810737   50360 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 09:41:47.810747   50360 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 09:41:47.810758   50360 main.go:143] libmachine: checking permissions on dir: /home
	I1101 09:41:47.810778   50360 main.go:143] libmachine: skipping /home - not owner
	I1101 09:41:47.810788   50360 main.go:143] libmachine: defining domain...
	I1101 09:41:47.812058   50360 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>bridge-997526</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/bridge-997526/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/bridge-997526/bridge-997526.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-bridge-997526'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:41:47.817135   50360 main.go:143] libmachine: domain bridge-997526 has defined MAC address 52:54:00:e6:58:6b in network default
	I1101 09:41:47.817924   50360 main.go:143] libmachine: domain bridge-997526 has defined MAC address 52:54:00:c1:da:2f in network mk-bridge-997526
	I1101 09:41:47.817955   50360 main.go:143] libmachine: starting domain...
	I1101 09:41:47.817962   50360 main.go:143] libmachine: ensuring networks are active...
	I1101 09:41:47.819102   50360 main.go:143] libmachine: Ensuring network default is active
	I1101 09:41:47.819672   50360 main.go:143] libmachine: Ensuring network mk-bridge-997526 is active
	I1101 09:41:47.820526   50360 main.go:143] libmachine: getting domain XML...
	I1101 09:41:47.821971   50360 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>bridge-997526</name>
	  <uuid>b015fa84-851c-44ea-a369-2744d086f5f0</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/bridge-997526/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/bridge-997526/bridge-997526.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:c1:da:2f'/>
	      <source network='mk-bridge-997526'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:e6:58:6b'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
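	Once the domain XML above is defined, libmachine starts the guest and polls for an address, first from the DHCP lease table and then via ARP (the source=lease / source=arp lines that follow). A manual equivalent with virsh, using the domain name from the XML:

# start the guest and watch for its first address
virsh --connect qemu:///system start bridge-997526
virsh --connect qemu:///system domifaddr bridge-997526 --source lease
virsh --connect qemu:///system domifaddr bridge-997526 --source arp   # fallback while the lease table is still empty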
	
	I1101 09:41:49.283106   50360 main.go:143] libmachine: waiting for domain to start...
	I1101 09:41:49.284512   50360 main.go:143] libmachine: domain is now running
	I1101 09:41:49.284528   50360 main.go:143] libmachine: waiting for IP...
	I1101 09:41:49.285248   50360 main.go:143] libmachine: domain bridge-997526 has defined MAC address 52:54:00:c1:da:2f in network mk-bridge-997526
	I1101 09:41:49.285782   50360 main.go:143] libmachine: no network interface addresses found for domain bridge-997526 (source=lease)
	I1101 09:41:49.285795   50360 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:49.286334   50360 main.go:143] libmachine: unable to find current IP address of domain bridge-997526 in network mk-bridge-997526 (interfaces detected: [])
	I1101 09:41:49.286396   50360 retry.go:31] will retry after 229.886386ms: waiting for domain to come up
	I1101 09:41:47.115516   48538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 09:41:47.144477   48538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
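	The bridge CNI config is pushed from memory to /etc/cni/net.d/1-k8s.conflist, so the 496-byte payload itself never appears in the log. A representative bridge conflist of roughly that shape, written as a here-doc (field values here are illustrative assumptions, not the exact file minikube generates):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF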
	I1101 09:41:47.174668   48538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:41:47.174777   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:41:47.174836   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-997526 minikube.k8s.io/updated_at=2025_11_01T09_41_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192 minikube.k8s.io/name=enable-default-cni-997526 minikube.k8s.io/primary=true
	I1101 09:41:47.216895   48538 ops.go:34] apiserver oom_adj: -16
	I1101 09:41:47.316474   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:41:47.816883   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:41:48.317444   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:41:48.817302   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:41:49.317407   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:41:49.816665   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:41:49.108575   49576 main.go:143] libmachine: domain flannel-997526 has defined MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:49.109153   49576 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:01:4f:7e", ip: ""} in network mk-flannel-997526: {Iface:virbr5 ExpiryTime:2025-11-01 10:41:43 +0000 UTC Type:0 Mac:52:54:00:01:4f:7e Iaid: IPaddr:192.168.83.140 Prefix:24 Hostname:flannel-997526 Clientid:01:52:54:00:01:4f:7e}
	I1101 09:41:49.109177   49576 main.go:143] libmachine: domain flannel-997526 has defined IP address 192.168.83.140 and MAC address 52:54:00:01:4f:7e in network mk-flannel-997526
	I1101 09:41:49.109375   49576 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I1101 09:41:49.114462   49576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:41:49.131497   49576 kubeadm.go:884] updating cluster {Name:flannel-997526 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:flannel-997526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.83.140 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:41:49.131630   49576 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:41:49.131698   49576 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:41:49.175463   49576 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1101 09:41:49.175552   49576 ssh_runner.go:195] Run: which lz4
	I1101 09:41:49.180320   49576 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1101 09:41:49.185316   49576 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1101 09:41:49.185353   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (409477533 bytes)
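	The 409 MB preload tarball is copied into the guest as /preloaded.tar.lz4; its extraction happens after this excerpt. A manual equivalent for unpacking an lz4-compressed image preload into the container storage under /var would look roughly like this (the tar invocation is an assumption about the tarball layout, not taken from this log):

# unpack the preloaded image store (requires the lz4 tool in the guest)
sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
sudo crictl images | head               # preloaded registry.k8s.io images should now be listed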
	I1101 09:41:50.317114   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:41:50.817405   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:41:51.317470   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:41:51.816984   48538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:41:51.996138   48538 kubeadm.go:1114] duration metric: took 4.821423903s to wait for elevateKubeSystemPrivileges
	I1101 09:41:51.996175   48538 kubeadm.go:403] duration metric: took 17.748052232s to StartCluster
	I1101 09:41:51.996198   48538 settings.go:142] acquiring lock: {Name:mk818d33e162ca33774e3ab05f6aac30f8feaf64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:41:51.996306   48538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:41:51.997656   48538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/kubeconfig: {Name:mk599bec02e6b7062c3926243176124a4bc71dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:41:51.997968   48538 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.50.176 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:41:51.998135   48538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:41:51.998143   48538 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:41:51.998246   48538 addons.go:70] Setting storage-provisioner=true in profile "enable-default-cni-997526"
	I1101 09:41:51.998264   48538 addons.go:239] Setting addon storage-provisioner=true in "enable-default-cni-997526"
	I1101 09:41:51.998298   48538 host.go:66] Checking if "enable-default-cni-997526" exists ...
	I1101 09:41:51.998385   48538 config.go:182] Loaded profile config "enable-default-cni-997526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:41:51.998441   48538 addons.go:70] Setting default-storageclass=true in profile "enable-default-cni-997526"
	I1101 09:41:51.998463   48538 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-997526"
	I1101 09:41:52.000084   48538 out.go:179] * Verifying Kubernetes components...
	I1101 09:41:52.001495   48538 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:41:52.001526   48538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:41:52.002764   48538 addons.go:239] Setting addon default-storageclass=true in "enable-default-cni-997526"
	I1101 09:41:52.002808   48538 host.go:66] Checking if "enable-default-cni-997526" exists ...
	I1101 09:41:52.002876   48538 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:41:52.002900   48538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:41:52.005237   48538 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:41:52.005255   48538 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:41:52.007268   48538 main.go:143] libmachine: domain enable-default-cni-997526 has defined MAC address 52:54:00:a5:e0:2f in network mk-enable-default-cni-997526
	I1101 09:41:52.008043   48538 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:e0:2f", ip: ""} in network mk-enable-default-cni-997526: {Iface:virbr2 ExpiryTime:2025-11-01 10:41:21 +0000 UTC Type:0 Mac:52:54:00:a5:e0:2f Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:enable-default-cni-997526 Clientid:01:52:54:00:a5:e0:2f}
	I1101 09:41:52.008086   48538 main.go:143] libmachine: domain enable-default-cni-997526 has defined IP address 192.168.50.176 and MAC address 52:54:00:a5:e0:2f in network mk-enable-default-cni-997526
	I1101 09:41:52.008540   48538 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/enable-default-cni-997526/id_rsa Username:docker}
	I1101 09:41:52.009638   48538 main.go:143] libmachine: domain enable-default-cni-997526 has defined MAC address 52:54:00:a5:e0:2f in network mk-enable-default-cni-997526
	I1101 09:41:52.010168   48538 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:a5:e0:2f", ip: ""} in network mk-enable-default-cni-997526: {Iface:virbr2 ExpiryTime:2025-11-01 10:41:21 +0000 UTC Type:0 Mac:52:54:00:a5:e0:2f Iaid: IPaddr:192.168.50.176 Prefix:24 Hostname:enable-default-cni-997526 Clientid:01:52:54:00:a5:e0:2f}
	I1101 09:41:52.010203   48538 main.go:143] libmachine: domain enable-default-cni-997526 has defined IP address 192.168.50.176 and MAC address 52:54:00:a5:e0:2f in network mk-enable-default-cni-997526
	I1101 09:41:52.010451   48538 sshutil.go:53] new ssh client: &{IP:192.168.50.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/enable-default-cni-997526/id_rsa Username:docker}
	I1101 09:41:52.372428   48538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:41:52.530745   48538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:41:52.762757   48538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:41:53.209721   48538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:41:53.501314   48538 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-997526" to be "Ready" ...
	I1101 09:41:53.505324   48538 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.132861257s)
	I1101 09:41:53.505351   48538 start.go:977] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
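	Not part of the test log: one way to confirm the injected host record by hand, assuming kubectl access on the node and the kubeconfig/binary paths used in this run (a sketch only):
	# look for the hosts block that the sed pipeline above inserted into the CoreDNS Corefile
	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | grep -n -A2 "host.minikube.internal"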
	I1101 09:41:53.550785   48538 node_ready.go:49] node "enable-default-cni-997526" is "Ready"
	I1101 09:41:53.550830   48538 node_ready.go:38] duration metric: took 49.465942ms for node "enable-default-cni-997526" to be "Ready" ...
	I1101 09:41:53.550850   48538 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:41:53.550939   48538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:41:54.010457   48538 kapi.go:214] "coredns" deployment in "kube-system" namespace and "enable-default-cni-997526" context rescaled to 1 replicas
	I1101 09:41:54.049760   48538 api_server.go:72] duration metric: took 2.051747738s to wait for apiserver process to appear ...
	I1101 09:41:54.049790   48538 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:41:54.049810   48538 api_server.go:253] Checking apiserver healthz at https://192.168.50.176:8443/healthz ...
	I1101 09:41:54.050699   48538 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.28790696s)
	I1101 09:41:54.078525   48538 api_server.go:279] https://192.168.50.176:8443/healthz returned 200:
	ok
	I1101 09:41:54.081744   48538 api_server.go:141] control plane version: v1.34.1
	I1101 09:41:54.081774   48538 api_server.go:131] duration metric: took 31.976847ms to wait for apiserver health ...
	I1101 09:41:54.081785   48538 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:41:54.087565   48538 system_pods.go:59] 8 kube-system pods found
	I1101 09:41:54.087612   48538 system_pods.go:61] "coredns-66bc5c9577-wmgmm" [86573251-4122-4419-ab53-18cf682be4fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:41:54.087627   48538 system_pods.go:61] "coredns-66bc5c9577-z4bgx" [4f9168ff-9ee2-491d-ae58-a0d5b6fa9dd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:41:54.087638   48538 system_pods.go:61] "etcd-enable-default-cni-997526" [33883119-114d-465b-80d3-6dc12e47b89e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:41:54.087654   48538 system_pods.go:61] "kube-apiserver-enable-default-cni-997526" [3cc573e2-ea6e-43da-8edb-7a3dcfeeb669] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:41:54.087661   48538 system_pods.go:61] "kube-controller-manager-enable-default-cni-997526" [8780c860-fcfb-48f2-8949-c25978830ff3] Running
	I1101 09:41:54.087671   48538 system_pods.go:61] "kube-proxy-2w945" [47f44d3f-bc82-4fb9-8d88-9857adbd1b14] Running
	I1101 09:41:54.087677   48538 system_pods.go:61] "kube-scheduler-enable-default-cni-997526" [d9ab1dd2-1c77-46ec-b094-69b953f0b34f] Running
	I1101 09:41:54.087697   48538 system_pods.go:61] "storage-provisioner" [7555fdeb-a618-4450-afa5-06588e13f947] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:41:54.087709   48538 system_pods.go:74] duration metric: took 5.916669ms to wait for pod list to return data ...
	I1101 09:41:54.087719   48538 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:41:54.090853   48538 default_sa.go:45] found service account: "default"
	I1101 09:41:54.090877   48538 default_sa.go:55] duration metric: took 3.152255ms for default service account to be created ...
	I1101 09:41:54.090891   48538 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:41:54.092136   48538 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:41:49.518577   50360 main.go:143] libmachine: domain bridge-997526 has defined MAC address 52:54:00:c1:da:2f in network mk-bridge-997526
	I1101 09:41:49.519575   50360 main.go:143] libmachine: no network interface addresses found for domain bridge-997526 (source=lease)
	I1101 09:41:49.519600   50360 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:49.520098   50360 main.go:143] libmachine: unable to find current IP address of domain bridge-997526 in network mk-bridge-997526 (interfaces detected: [])
	I1101 09:41:49.520141   50360 retry.go:31] will retry after 362.73688ms: waiting for domain to come up
	I1101 09:41:49.885037   50360 main.go:143] libmachine: domain bridge-997526 has defined MAC address 52:54:00:c1:da:2f in network mk-bridge-997526
	I1101 09:41:49.885909   50360 main.go:143] libmachine: no network interface addresses found for domain bridge-997526 (source=lease)
	I1101 09:41:49.885930   50360 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:49.886372   50360 main.go:143] libmachine: unable to find current IP address of domain bridge-997526 in network mk-bridge-997526 (interfaces detected: [])
	I1101 09:41:49.886413   50360 retry.go:31] will retry after 407.560491ms: waiting for domain to come up
	I1101 09:41:50.296114   50360 main.go:143] libmachine: domain bridge-997526 has defined MAC address 52:54:00:c1:da:2f in network mk-bridge-997526
	I1101 09:41:50.296846   50360 main.go:143] libmachine: no network interface addresses found for domain bridge-997526 (source=lease)
	I1101 09:41:50.296864   50360 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:50.297329   50360 main.go:143] libmachine: unable to find current IP address of domain bridge-997526 in network mk-bridge-997526 (interfaces detected: [])
	I1101 09:41:50.297373   50360 retry.go:31] will retry after 474.301727ms: waiting for domain to come up
	I1101 09:41:50.774059   50360 main.go:143] libmachine: domain bridge-997526 has defined MAC address 52:54:00:c1:da:2f in network mk-bridge-997526
	I1101 09:41:50.774956   50360 main.go:143] libmachine: no network interface addresses found for domain bridge-997526 (source=lease)
	I1101 09:41:50.774972   50360 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:50.775449   50360 main.go:143] libmachine: unable to find current IP address of domain bridge-997526 in network mk-bridge-997526 (interfaces detected: [])
	I1101 09:41:50.775523   50360 retry.go:31] will retry after 579.356979ms: waiting for domain to come up
	I1101 09:41:51.356130   50360 main.go:143] libmachine: domain bridge-997526 has defined MAC address 52:54:00:c1:da:2f in network mk-bridge-997526
	I1101 09:41:51.357100   50360 main.go:143] libmachine: no network interface addresses found for domain bridge-997526 (source=lease)
	I1101 09:41:51.357127   50360 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:51.357681   50360 main.go:143] libmachine: unable to find current IP address of domain bridge-997526 in network mk-bridge-997526 (interfaces detected: [])
	I1101 09:41:51.357726   50360 retry.go:31] will retry after 831.344468ms: waiting for domain to come up
	I1101 09:41:52.192849   50360 main.go:143] libmachine: domain bridge-997526 has defined MAC address 52:54:00:c1:da:2f in network mk-bridge-997526
	I1101 09:41:52.193663   50360 main.go:143] libmachine: no network interface addresses found for domain bridge-997526 (source=lease)
	I1101 09:41:52.193686   50360 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:52.194291   50360 main.go:143] libmachine: unable to find current IP address of domain bridge-997526 in network mk-bridge-997526 (interfaces detected: [])
	I1101 09:41:52.194340   50360 retry.go:31] will retry after 869.728584ms: waiting for domain to come up
	I1101 09:41:53.065679   50360 main.go:143] libmachine: domain bridge-997526 has defined MAC address 52:54:00:c1:da:2f in network mk-bridge-997526
	I1101 09:41:53.066767   50360 main.go:143] libmachine: no network interface addresses found for domain bridge-997526 (source=lease)
	I1101 09:41:53.066795   50360 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:53.067325   50360 main.go:143] libmachine: unable to find current IP address of domain bridge-997526 in network mk-bridge-997526 (interfaces detected: [])
	I1101 09:41:53.067369   50360 retry.go:31] will retry after 970.0478ms: waiting for domain to come up
	I1101 09:41:54.039660   50360 main.go:143] libmachine: domain bridge-997526 has defined MAC address 52:54:00:c1:da:2f in network mk-bridge-997526
	I1101 09:41:54.040697   50360 main.go:143] libmachine: no network interface addresses found for domain bridge-997526 (source=lease)
	I1101 09:41:54.040715   50360 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:41:54.041177   50360 main.go:143] libmachine: unable to find current IP address of domain bridge-997526 in network mk-bridge-997526 (interfaces detected: [])
	I1101 09:41:54.041240   50360 retry.go:31] will retry after 1.207104225s: waiting for domain to come up
	I1101 09:41:54.093366   48538 addons.go:515] duration metric: took 2.095225521s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:41:54.094381   48538 system_pods.go:86] 8 kube-system pods found
	I1101 09:41:54.094404   48538 system_pods.go:89] "coredns-66bc5c9577-wmgmm" [86573251-4122-4419-ab53-18cf682be4fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:41:54.094410   48538 system_pods.go:89] "coredns-66bc5c9577-z4bgx" [4f9168ff-9ee2-491d-ae58-a0d5b6fa9dd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:41:54.094416   48538 system_pods.go:89] "etcd-enable-default-cni-997526" [33883119-114d-465b-80d3-6dc12e47b89e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:41:54.094431   48538 system_pods.go:89] "kube-apiserver-enable-default-cni-997526" [3cc573e2-ea6e-43da-8edb-7a3dcfeeb669] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:41:54.094435   48538 system_pods.go:89] "kube-controller-manager-enable-default-cni-997526" [8780c860-fcfb-48f2-8949-c25978830ff3] Running
	I1101 09:41:54.094439   48538 system_pods.go:89] "kube-proxy-2w945" [47f44d3f-bc82-4fb9-8d88-9857adbd1b14] Running
	I1101 09:41:54.094442   48538 system_pods.go:89] "kube-scheduler-enable-default-cni-997526" [d9ab1dd2-1c77-46ec-b094-69b953f0b34f] Running
	I1101 09:41:54.094446   48538 system_pods.go:89] "storage-provisioner" [7555fdeb-a618-4450-afa5-06588e13f947] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:41:54.094467   48538 retry.go:31] will retry after 244.933692ms: missing components: kube-dns
	I1101 09:41:54.345392   48538 system_pods.go:86] 8 kube-system pods found
	I1101 09:41:54.345421   48538 system_pods.go:89] "coredns-66bc5c9577-wmgmm" [86573251-4122-4419-ab53-18cf682be4fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:41:54.345428   48538 system_pods.go:89] "coredns-66bc5c9577-z4bgx" [4f9168ff-9ee2-491d-ae58-a0d5b6fa9dd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:41:54.345436   48538 system_pods.go:89] "etcd-enable-default-cni-997526" [33883119-114d-465b-80d3-6dc12e47b89e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:41:54.345445   48538 system_pods.go:89] "kube-apiserver-enable-default-cni-997526" [3cc573e2-ea6e-43da-8edb-7a3dcfeeb669] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:41:54.345460   48538 system_pods.go:89] "kube-controller-manager-enable-default-cni-997526" [8780c860-fcfb-48f2-8949-c25978830ff3] Running
	I1101 09:41:54.345465   48538 system_pods.go:89] "kube-proxy-2w945" [47f44d3f-bc82-4fb9-8d88-9857adbd1b14] Running
	I1101 09:41:54.345470   48538 system_pods.go:89] "kube-scheduler-enable-default-cni-997526" [d9ab1dd2-1c77-46ec-b094-69b953f0b34f] Running
	I1101 09:41:54.345486   48538 system_pods.go:89] "storage-provisioner" [7555fdeb-a618-4450-afa5-06588e13f947] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:41:54.345507   48538 retry.go:31] will retry after 331.634018ms: missing components: kube-dns
	I1101 09:41:54.686987   48538 system_pods.go:86] 8 kube-system pods found
	I1101 09:41:54.687026   48538 system_pods.go:89] "coredns-66bc5c9577-wmgmm" [86573251-4122-4419-ab53-18cf682be4fc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:41:54.687036   48538 system_pods.go:89] "coredns-66bc5c9577-z4bgx" [4f9168ff-9ee2-491d-ae58-a0d5b6fa9dd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:41:54.687052   48538 system_pods.go:89] "etcd-enable-default-cni-997526" [33883119-114d-465b-80d3-6dc12e47b89e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:41:54.687064   48538 system_pods.go:89] "kube-apiserver-enable-default-cni-997526" [3cc573e2-ea6e-43da-8edb-7a3dcfeeb669] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:41:54.687073   48538 system_pods.go:89] "kube-controller-manager-enable-default-cni-997526" [8780c860-fcfb-48f2-8949-c25978830ff3] Running
	I1101 09:41:54.687083   48538 system_pods.go:89] "kube-proxy-2w945" [47f44d3f-bc82-4fb9-8d88-9857adbd1b14] Running
	I1101 09:41:54.687088   48538 system_pods.go:89] "kube-scheduler-enable-default-cni-997526" [d9ab1dd2-1c77-46ec-b094-69b953f0b34f] Running
	I1101 09:41:54.687096   48538 system_pods.go:89] "storage-provisioner" [7555fdeb-a618-4450-afa5-06588e13f947] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:41:54.687113   48538 retry.go:31] will retry after 430.433638ms: missing components: kube-dns
	I1101 09:41:50.777977   49576 crio.go:462] duration metric: took 1.597687133s to copy over tarball
	I1101 09:41:50.778068   49576 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1101 09:41:52.708597   49576 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.930494971s)
	I1101 09:41:52.708643   49576 crio.go:469] duration metric: took 1.930642104s to extract the tarball
	I1101 09:41:52.708651   49576 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1101 09:41:52.753077   49576 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:41:52.806431   49576 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:41:52.806462   49576 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:41:52.806473   49576 kubeadm.go:935] updating node { 192.168.83.140 8443 v1.34.1 crio true true} ...
	I1101 09:41:52.806582   49576 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-997526 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.83.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:flannel-997526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I1101 09:41:52.806678   49576 ssh_runner.go:195] Run: crio config
	I1101 09:41:52.863005   49576 cni.go:84] Creating CNI manager for "flannel"
	I1101 09:41:52.863045   49576 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:41:52.863074   49576 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.140 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-997526 NodeName:flannel-997526 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:41:52.863244   49576 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-997526"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.83.140"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.140"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
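	Not part of the test log: the generated kubeadm config above can be sanity-checked on the node before init. A minimal sketch, assuming a recent kubeadm that ships the `kubeadm config validate` subcommand and the binary/config paths shown in this run:
	# validate the rendered config against the kubeadm API types
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# print effective defaults for side-by-side comparison
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config print init-defaults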
	I1101 09:41:52.863323   49576 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:41:52.876965   49576 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:41:52.877043   49576 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:41:52.890644   49576 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I1101 09:41:52.915648   49576 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:41:52.943561   49576 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1101 09:41:52.968320   49576 ssh_runner.go:195] Run: grep 192.168.83.140	control-plane.minikube.internal$ /etc/hosts
	I1101 09:41:52.974266   49576 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.140	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:41:52.991069   49576 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:41:53.150988   49576 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:41:53.202374   49576 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526 for IP: 192.168.83.140
	I1101 09:41:53.202397   49576 certs.go:195] generating shared ca certs ...
	I1101 09:41:53.202414   49576 certs.go:227] acquiring lock for ca certs: {Name:mk23a33d19209ad24f4406326ada43ab5cb57960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:41:53.202598   49576 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key
	I1101 09:41:53.202686   49576 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key
	I1101 09:41:53.202707   49576 certs.go:257] generating profile certs ...
	I1101 09:41:53.202781   49576 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/client.key
	I1101 09:41:53.202799   49576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/client.crt with IP's: []
	I1101 09:41:53.494290   49576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/client.crt ...
	I1101 09:41:53.494322   49576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/client.crt: {Name:mk705251af4f1067c3819e76cb0fee338b746082 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:41:53.494513   49576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/client.key ...
	I1101 09:41:53.494528   49576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/client.key: {Name:mkd5293730f518470852a14eb9a7f0e622b819b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:41:53.494636   49576 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.key.533b2d0b
	I1101 09:41:53.494655   49576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.crt.533b2d0b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.140]
	I1101 09:41:53.935268   49576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.crt.533b2d0b ...
	I1101 09:41:53.935293   49576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.crt.533b2d0b: {Name:mk481d8f978b3590608b2392cc0583fb38aae834 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:41:53.935447   49576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.key.533b2d0b ...
	I1101 09:41:53.935463   49576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.key.533b2d0b: {Name:mk0a322e13efeb86b871af24b4193f4d9d530ee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:41:53.935577   49576 certs.go:382] copying /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.crt.533b2d0b -> /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.crt
	I1101 09:41:53.935666   49576 certs.go:386] copying /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.key.533b2d0b -> /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.key
	I1101 09:41:53.935740   49576 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/proxy-client.key
	I1101 09:41:53.935765   49576 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/proxy-client.crt with IP's: []
	I1101 09:41:54.650231   49576 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/proxy-client.crt ...
	I1101 09:41:54.650259   49576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/proxy-client.crt: {Name:mk5de6239907d8c9e316b1c752c4641aaa2e5108 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:41:54.650466   49576 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/proxy-client.key ...
	I1101 09:41:54.650481   49576 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/proxy-client.key: {Name:mkce53f829412f0839014f6b8e91ef8f335b6465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:41:54.650654   49576 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793.pem (1338 bytes)
	W1101 09:41:54.650697   49576 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793_empty.pem, impossibly tiny 0 bytes
	I1101 09:41:54.650710   49576 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:41:54.650745   49576 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:41:54.650774   49576 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:41:54.650793   49576 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem (1679 bytes)
	I1101 09:41:54.650844   49576 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem (1708 bytes)
	I1101 09:41:54.651479   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:41:54.686796   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:41:54.732930   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:41:54.763129   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 09:41:54.799995   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:41:54.837951   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:41:54.927347   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:41:55.040305   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/flannel-997526/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:41:55.069964   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:41:55.100898   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793.pem --> /usr/share/ca-certificates/9793.pem (1338 bytes)
	I1101 09:41:55.135582   49576 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem --> /usr/share/ca-certificates/97932.pem (1708 bytes)
	I1101 09:41:56.278998   36048 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.39.77:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.39.77:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1101 09:41:56.279155   36048 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1101 09:41:56.283487   36048 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:41:56.283562   36048 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:41:56.283665   36048 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:41:56.283842   36048 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:41:56.283967   36048 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:41:56.284050   36048 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 09:41:56.410410   36048 out.go:252]   - Generating certificates and keys ...
	I1101 09:41:56.410534   36048 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:41:56.410632   36048 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:41:56.410758   36048 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1101 09:41:56.410856   36048 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1101 09:41:56.411006   36048 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1101 09:41:56.411118   36048 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1101 09:41:56.411228   36048 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1101 09:41:56.411337   36048 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1101 09:41:56.411466   36048 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1101 09:41:56.411597   36048 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1101 09:41:56.411655   36048 kubeadm.go:319] [certs] Using the existing "sa" key
	I1101 09:41:56.411753   36048 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:41:56.411831   36048 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:41:56.411910   36048 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:41:56.411998   36048 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:41:56.412092   36048 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:41:56.412172   36048 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:41:56.412313   36048 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:41:56.412420   36048 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 09:41:56.426726   36048 out.go:252]   - Booting up control plane ...
	I1101 09:41:56.426878   36048 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:41:56.427021   36048 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:41:56.427130   36048 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:41:56.427278   36048 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:41:56.427429   36048 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:41:56.427586   36048 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:41:56.427730   36048 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:41:56.427805   36048 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:41:56.427969   36048 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:41:56.428122   36048 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:41:56.428242   36048 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001912783s
	I1101 09:41:56.428378   36048 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:41:56.428513   36048 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.39.77:8443/livez
	I1101 09:41:56.428648   36048 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:41:56.428769   36048 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:41:56.428897   36048 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 32.532039252s
	I1101 09:41:56.429015   36048 kubeadm.go:319] [control-plane-check] kube-apiserver is not healthy after 4m0.000330252s
	I1101 09:41:56.429157   36048 kubeadm.go:319] [control-plane-check] kube-controller-manager is not healthy after 4m0.000728673s
	I1101 09:41:56.429168   36048 kubeadm.go:319] 
	I1101 09:41:56.429317   36048 kubeadm.go:319] A control plane component may have crashed or exited when started by the container runtime.
	I1101 09:41:56.429434   36048 kubeadm.go:319] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1101 09:41:56.429567   36048 kubeadm.go:319] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1101 09:41:56.429705   36048 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1101 09:41:56.429811   36048 kubeadm.go:319] 	Once you have found the failing container, you can inspect its logs with:
	I1101 09:41:56.429961   36048 kubeadm.go:319] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1101 09:41:56.430023   36048 kubeadm.go:319] 
	I1101 09:41:56.430050   36048 kubeadm.go:403] duration metric: took 12m17.389796935s to StartCluster
	I1101 09:41:56.430106   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:41:56.430170   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:41:56.491154   36048 cri.go:89] found id: "9fd89487ba0f1ed6bbe64879d86b816130f23ac3913efa2ec2e8595222a81a14"
	I1101 09:41:56.491178   36048 cri.go:89] found id: ""
	I1101 09:41:56.491187   36048 logs.go:282] 1 containers: [9fd89487ba0f1ed6bbe64879d86b816130f23ac3913efa2ec2e8595222a81a14]
	I1101 09:41:56.491266   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:41:56.497604   36048 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:41:56.497682   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:41:56.540153   36048 cri.go:89] found id: ""
	I1101 09:41:56.540182   36048 logs.go:282] 0 containers: []
	W1101 09:41:56.540195   36048 logs.go:284] No container was found matching "etcd"
	I1101 09:41:56.540223   36048 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:41:56.540284   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:41:56.588578   36048 cri.go:89] found id: ""
	I1101 09:41:56.588604   36048 logs.go:282] 0 containers: []
	W1101 09:41:56.588615   36048 logs.go:284] No container was found matching "coredns"
	I1101 09:41:56.588622   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:41:56.588686   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:41:56.629691   36048 cri.go:89] found id: "9615698a839800fe1fb1c6f88700bcc7d1a39579a15b9d97eebf9ca04925be6c"
	I1101 09:41:56.629718   36048 cri.go:89] found id: ""
	I1101 09:41:56.629728   36048 logs.go:282] 1 containers: [9615698a839800fe1fb1c6f88700bcc7d1a39579a15b9d97eebf9ca04925be6c]
	I1101 09:41:56.629792   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:41:56.634572   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:41:56.634642   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:41:56.672964   36048 cri.go:89] found id: ""
	I1101 09:41:56.672993   36048 logs.go:282] 0 containers: []
	W1101 09:41:56.673005   36048 logs.go:284] No container was found matching "kube-proxy"
	I1101 09:41:56.673013   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:41:56.673075   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:41:56.727080   36048 cri.go:89] found id: ""
	I1101 09:41:56.727116   36048 logs.go:282] 0 containers: []
	W1101 09:41:56.727128   36048 logs.go:284] No container was found matching "kube-controller-manager"
	I1101 09:41:56.727136   36048 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:41:56.727219   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:41:56.766390   36048 cri.go:89] found id: ""
	I1101 09:41:56.766423   36048 logs.go:282] 0 containers: []
	W1101 09:41:56.766435   36048 logs.go:284] No container was found matching "kindnet"
	I1101 09:41:56.766443   36048 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:41:56.766511   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:41:56.811877   36048 cri.go:89] found id: ""
	I1101 09:41:56.811905   36048 logs.go:282] 0 containers: []
	W1101 09:41:56.811915   36048 logs.go:284] No container was found matching "storage-provisioner"
	I1101 09:41:56.811928   36048 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:41:56.811943   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:41:57.042060   36048 logs.go:123] Gathering logs for container status ...
	I1101 09:41:57.042096   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:41:57.084932   36048 logs.go:123] Gathering logs for kubelet ...
	I1101 09:41:57.084961   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:41:57.203137   36048 logs.go:123] Gathering logs for dmesg ...
	I1101 09:41:57.203174   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:41:57.226686   36048 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:41:57.226723   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:41:57.307841   36048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:41:57.307866   36048 logs.go:123] Gathering logs for kube-apiserver [9fd89487ba0f1ed6bbe64879d86b816130f23ac3913efa2ec2e8595222a81a14] ...
	I1101 09:41:57.307916   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fd89487ba0f1ed6bbe64879d86b816130f23ac3913efa2ec2e8595222a81a14"
	I1101 09:41:57.361200   36048 logs.go:123] Gathering logs for kube-scheduler [9615698a839800fe1fb1c6f88700bcc7d1a39579a15b9d97eebf9ca04925be6c] ...
	I1101 09:41:57.361263   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9615698a839800fe1fb1c6f88700bcc7d1a39579a15b9d97eebf9ca04925be6c"
	W1101 09:41:57.445148   36048 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001912783s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.39.77:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is healthy after 32.532039252s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000330252s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000728673s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.39.77:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.39.77:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1101 09:41:57.445265   36048 out.go:285] * 
	W1101 09:41:57.445340   36048 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001912783s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.39.77:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is healthy after 32.532039252s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000330252s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000728673s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.39.77:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.39.77:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 09:41:57.445364   36048 out.go:285] * 
	W1101 09:41:57.447941   36048 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:41:57.451981   36048 out.go:203] 
	W1101 09:41:57.453371   36048 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.001912783s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.39.77:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is healthy after 32.532039252s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000330252s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000728673s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.39.77:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.39.77:8443: connect: connection refused, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1101 09:41:57.453409   36048 out.go:285] * 
	I1101 09:41:57.455145   36048 out.go:203] 
	
	
	==> CRI-O <==
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.175161116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990118175086115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e27c750e-b787-47cd-9a54-96bcdb746d10 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.176135970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=69c29108-4b17-4414-9497-efd85438bccf name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.176202227Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=69c29108-4b17-4414-9497-efd85438bccf name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.176460627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd89487ba0f1ed6bbe64879d86b816130f23ac3913efa2ec2e8595222a81a14,PodSandboxId:59d9c7f5406f4f38804e60ed76e8deb19d69410017b7704721923249a53fee50,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761990052182239425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-133315,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10d84340e4ceb7631a781e37091d4ffe,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":
\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9615698a839800fe1fb1c6f88700bcc7d1a39579a15b9d97eebf9ca04925be6c,PodSandboxId:35f787e95b4e4a6a68cfbe8841428cd9c2a32404d43456249cc405a9219d526c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989876839430031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-133315,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf08da53f47be4357c2e8703112e
14c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=69c29108-4b17-4414-9497-efd85438bccf name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.218839197Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9474bfbc-ae3b-41fa-9c22-4f2a8be34231 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.218917296Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9474bfbc-ae3b-41fa-9c22-4f2a8be34231 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.220394525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4162b13f-7c9c-481c-bd84-a71db9ceeb1d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.220838885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990118220817225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4162b13f-7c9c-481c-bd84-a71db9ceeb1d name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.221859993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6ff30cc-3c1b-421d-9e2f-f5405a1239a4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.221926307Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6ff30cc-3c1b-421d-9e2f-f5405a1239a4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.222015337Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd89487ba0f1ed6bbe64879d86b816130f23ac3913efa2ec2e8595222a81a14,PodSandboxId:59d9c7f5406f4f38804e60ed76e8deb19d69410017b7704721923249a53fee50,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761990052182239425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-133315,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10d84340e4ceb7631a781e37091d4ffe,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":
\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9615698a839800fe1fb1c6f88700bcc7d1a39579a15b9d97eebf9ca04925be6c,PodSandboxId:35f787e95b4e4a6a68cfbe8841428cd9c2a32404d43456249cc405a9219d526c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989876839430031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-133315,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf08da53f47be4357c2e8703112e
14c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6ff30cc-3c1b-421d-9e2f-f5405a1239a4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.261987313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=52391d5b-f528-4067-afe0-44b866a8cec2 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.262100594Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=52391d5b-f528-4067-afe0-44b866a8cec2 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.263915434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b10ecb56-af39-401a-a85b-ef9ddaf208f7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.264623387Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990118264588385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b10ecb56-af39-401a-a85b-ef9ddaf208f7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.265334499Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ada09e95-83a1-4bcd-b181-934d0194e9db name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.265404106Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ada09e95-83a1-4bcd-b181-934d0194e9db name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.265507619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd89487ba0f1ed6bbe64879d86b816130f23ac3913efa2ec2e8595222a81a14,PodSandboxId:59d9c7f5406f4f38804e60ed76e8deb19d69410017b7704721923249a53fee50,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761990052182239425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-133315,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10d84340e4ceb7631a781e37091d4ffe,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":
\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9615698a839800fe1fb1c6f88700bcc7d1a39579a15b9d97eebf9ca04925be6c,PodSandboxId:35f787e95b4e4a6a68cfbe8841428cd9c2a32404d43456249cc405a9219d526c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989876839430031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-133315,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf08da53f47be4357c2e8703112e
14c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ada09e95-83a1-4bcd-b181-934d0194e9db name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.305798620Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=839a700c-d550-4eeb-bb42-85ccaa180d48 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.305918167Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=839a700c-d550-4eeb-bb42-85ccaa180d48 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.307315860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe6ad3dd-f34d-4b6d-9a86-e7dda54fd0ff name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.308364993Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761990118308335165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe6ad3dd-f34d-4b6d-9a86-e7dda54fd0ff name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.309462050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3f818b0-ed0b-4810-81e2-3618c00b80d1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.309603257Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3f818b0-ed0b-4810-81e2-3618c00b80d1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:41:58 kubernetes-upgrade-133315 crio[3238]: time="2025-11-01 09:41:58.309728521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9fd89487ba0f1ed6bbe64879d86b816130f23ac3913efa2ec2e8595222a81a14,PodSandboxId:59d9c7f5406f4f38804e60ed76e8deb19d69410017b7704721923249a53fee50,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:15,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761990052182239425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-133315,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10d84340e4ceb7631a781e37091d4ffe,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":
\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 15,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9615698a839800fe1fb1c6f88700bcc7d1a39579a15b9d97eebf9ca04925be6c,PodSandboxId:35f787e95b4e4a6a68cfbe8841428cd9c2a32404d43456249cc405a9219d526c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:4,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989876839430031,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-133315,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2bf08da53f47be4357c2e8703112e
14c,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3f818b0-ed0b-4810-81e2-3618c00b80d1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                ATTEMPT             POD ID              POD
	9fd89487ba0f1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Exited              kube-apiserver      15                  59d9c7f5406f4       kube-apiserver-kubernetes-upgrade-133315
	9615698a83980       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   4 minutes ago        Running             kube-scheduler      4                   35f787e95b4e4       kube-scheduler-kubernetes-upgrade-133315
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000021] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.115431] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.111821] kauditd_printk_skb: 74 callbacks suppressed
	[  +2.908997] kauditd_printk_skb: 169 callbacks suppressed
	[  +3.789326] kauditd_printk_skb: 20 callbacks suppressed
	[  +0.051141] kauditd_printk_skb: 125 callbacks suppressed
	[Nov 1 09:28] kauditd_printk_skb: 63 callbacks suppressed
	[Nov 1 09:29] kauditd_printk_skb: 330 callbacks suppressed
	[  +1.275418] kauditd_printk_skb: 237 callbacks suppressed
	[Nov 1 09:30] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 09:31] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 09:32] kauditd_printk_skb: 9 callbacks suppressed
	[ +22.041433] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 09:33] kauditd_printk_skb: 32 callbacks suppressed
	[Nov 1 09:34] kauditd_printk_skb: 80 callbacks suppressed
	[ +21.160972] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 09:35] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 09:36] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 09:37] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 09:38] kauditd_printk_skb: 108 callbacks suppressed
	[ +21.156259] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 09:39] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 09:40] kauditd_printk_skb: 5 callbacks suppressed
	[Nov 1 09:41] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> kernel <==
	 09:41:58 up 15 min,  0 users,  load average: 0.03, 0.11, 0.11
	Linux kubernetes-upgrade-133315 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [9fd89487ba0f1ed6bbe64879d86b816130f23ac3913efa2ec2e8595222a81a14] <==
	W1101 09:40:53.798700       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:40:53.798763       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1101 09:40:53.799352       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1101 09:40:53.808329       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:40:53.818601       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1101 09:40:53.818684       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1101 09:40:53.818995       1 instance.go:239] Using reconciler: lease
	W1101 09:40:53.821225       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 09:40:53.821992       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:40:54.799096       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:40:54.799096       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:40:54.823435       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:40:56.171016       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:40:56.568593       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:40:56.580218       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:40:58.333013       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:40:58.750412       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:40:59.245625       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:41:02.042751       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:41:02.980351       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:41:03.853025       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:41:09.903405       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:41:10.125332       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:41:11.390726       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F1101 09:41:13.820898       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-scheduler [9615698a839800fe1fb1c6f88700bcc7d1a39579a15b9d97eebf9ca04925be6c] <==
	E1101 09:41:06.596452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.77:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:41:08.601361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.77:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:41:09.241175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.77:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:41:10.963656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.77:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:41:12.724127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.39.77:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:41:14.827131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.77:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.77:41470->192.168.39.77:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:41:14.827131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.39.77:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.39.77:37396->192.168.39.77:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:41:25.163346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.39.77:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:41:25.313835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.39.77:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:41:29.404654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.39.77:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:41:35.752491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.39.77:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:41:39.583013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.39.77:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:41:40.051030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.77:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:41:41.651207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.39.77:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:41:42.423708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.39.77:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:41:46.640473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.39.77:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:41:47.148334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.77:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:41:47.552451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.39.77:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:41:49.394704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.77:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:41:50.400730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.39.77:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:41:51.227536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.39.77:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:41:53.352154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.39.77:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:41:54.882132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.77:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:41:54.965722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.39.77:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:41:55.706397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.39.77:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	
	
	==> kubelet <==
	Nov 01 09:41:44 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:44.171510   10136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.77:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-133315?timeout=10s\": dial tcp 192.168.39.77:8443: connect: connection refused" interval="7s"
	Nov 01 09:41:45 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:45.170526   10136 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-133315\" not found" node="kubernetes-upgrade-133315"
	Nov 01 09:41:45 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:45.187065   10136 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-133315_kube-system_1ee54dcefb0a95db4c2e679b244e8912_1\" is already in use by acc43a38b55147cbf92475ac79ebec04dd1d93b3a246f94b63bf4f0c8d413648. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="a0cdfde6f083a573e6cc28b8fd042e76698023f05d3e2fc305777c80007222ee"
	Nov 01 09:41:45 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:45.187209   10136 kuberuntime_manager.go:1449] "Unhandled Error" err="container kube-controller-manager start failed in pod kube-controller-manager-kubernetes-upgrade-133315_kube-system(1ee54dcefb0a95db4c2e679b244e8912): CreateContainerError: the container name \"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-133315_kube-system_1ee54dcefb0a95db4c2e679b244e8912_1\" is already in use by acc43a38b55147cbf92475ac79ebec04dd1d93b3a246f94b63bf4f0c8d413648. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Nov 01 09:41:45 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:45.187355   10136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"the container name \\\"k8s_kube-controller-manager_kube-controller-manager-kubernetes-upgrade-133315_kube-system_1ee54dcefb0a95db4c2e679b244e8912_1\\\" is already in use by acc43a38b55147cbf92475ac79ebec04dd1d93b3a246f94b63bf4f0c8d413648. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-controller-manager-kubernetes-upgrade-133315" podUID="1ee54dcefb0a95db4c2e679b244e8912"
	Nov 01 09:41:46 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:46.263297   10136 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990106262928186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 09:41:46 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:46.263357   10136 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990106262928186  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 09:41:47 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:47.119123   10136 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.39.77:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-133315&limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Nov 01 09:41:47 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:47.171111   10136 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-133315\" not found" node="kubernetes-upgrade-133315"
	Nov 01 09:41:47 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:47.180566   10136 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-kubernetes-upgrade-133315_kube-system_1974a05b634c0a980af686047d08f676_1\" is already in use by c22266a2a993d55bce5e21d9c7dea684eb168b5dae7037ff98f74810de775b32. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="d27be495bf1e9209856cf7702aa79fe54456c40f85c223b30b30efa416c5f1af"
	Nov 01 09:41:47 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:47.180723   10136 kuberuntime_manager.go:1449] "Unhandled Error" err="container etcd start failed in pod etcd-kubernetes-upgrade-133315_kube-system(1974a05b634c0a980af686047d08f676): CreateContainerError: the container name \"k8s_etcd_etcd-kubernetes-upgrade-133315_kube-system_1974a05b634c0a980af686047d08f676_1\" is already in use by c22266a2a993d55bce5e21d9c7dea684eb168b5dae7037ff98f74810de775b32. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Nov 01 09:41:47 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:47.180774   10136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-kubernetes-upgrade-133315_kube-system_1974a05b634c0a980af686047d08f676_1\\\" is already in use by c22266a2a993d55bce5e21d9c7dea684eb168b5dae7037ff98f74810de775b32. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-kubernetes-upgrade-133315" podUID="1974a05b634c0a980af686047d08f676"
	Nov 01 09:41:47 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:47.697812   10136 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.39.77:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.39.77:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Nov 01 09:41:48 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:48.171226   10136 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"kubernetes-upgrade-133315\" not found" node="kubernetes-upgrade-133315"
	Nov 01 09:41:48 kubernetes-upgrade-133315 kubelet[10136]: I1101 09:41:48.171463   10136 scope.go:117] "RemoveContainer" containerID="9fd89487ba0f1ed6bbe64879d86b816130f23ac3913efa2ec2e8595222a81a14"
	Nov 01 09:41:48 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:48.171836   10136 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-kubernetes-upgrade-133315_kube-system(10d84340e4ceb7631a781e37091d4ffe)\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-133315" podUID="10d84340e4ceb7631a781e37091d4ffe"
	Nov 01 09:41:50 kubernetes-upgrade-133315 kubelet[10136]: I1101 09:41:50.193555   10136 kubelet_node_status.go:75] "Attempting to register node" node="kubernetes-upgrade-133315"
	Nov 01 09:41:50 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:50.194078   10136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.39.77:8443/api/v1/nodes\": dial tcp 192.168.39.77:8443: connect: connection refused" node="kubernetes-upgrade-133315"
	Nov 01 09:41:51 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:51.173051   10136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.77:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-133315?timeout=10s\": dial tcp 192.168.39.77:8443: connect: connection refused" interval="7s"
	Nov 01 09:41:53 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:53.831431   10136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.39.77:8443/api/v1/namespaces/default/events\": dial tcp 192.168.39.77:8443: connect: connection refused" event="&Event{ObjectMeta:{kubernetes-upgrade-133315.1873d874fca0e3f0  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:kubernetes-upgrade-133315,UID:kubernetes-upgrade-133315,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node kubernetes-upgrade-133315 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:kubernetes-upgrade-133315,},FirstTimestamp:2025-11-01 09:37:56.1982044 +0000 UTC m=+0.947896778,LastTimestamp:2025-11-01 09:37:56.1982044 +0000 UTC m=+0.947896778,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:kubernetes-upgrade-133315,}"
	Nov 01 09:41:56 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:56.267775   10136 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761990116266798645  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 09:41:56 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:56.267826   10136 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761990116266798645  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 09:41:57 kubernetes-upgrade-133315 kubelet[10136]: I1101 09:41:57.196296   10136 kubelet_node_status.go:75] "Attempting to register node" node="kubernetes-upgrade-133315"
	Nov 01 09:41:57 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:57.196584   10136 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.39.77:8443/api/v1/nodes\": dial tcp 192.168.39.77:8443: connect: connection refused" node="kubernetes-upgrade-133315"
	Nov 01 09:41:58 kubernetes-upgrade-133315 kubelet[10136]: E1101 09:41:58.174473   10136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.39.77:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-133315?timeout=10s\": dial tcp 192.168.39.77:8443: connect: connection refused" interval="7s"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-133315 -n kubernetes-upgrade-133315
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-133315 -n kubernetes-upgrade-133315: exit status 2 (251.648165ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "kubernetes-upgrade-133315" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-133315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-133315
--- FAIL: TestKubernetesUpgrade (986.65s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (54.14s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-855890 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-855890 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.172312683s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-855890] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-855890" primary control-plane node in "pause-855890" cluster
	* Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-855890" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:30:05.526980   40325 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:30:05.527146   40325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:30:05.527157   40325 out.go:374] Setting ErrFile to fd 2...
	I1101 09:30:05.527163   40325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:30:05.527500   40325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 09:30:05.528039   40325 out.go:368] Setting JSON to false
	I1101 09:30:05.529284   40325 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4353,"bootTime":1761985053,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:30:05.529409   40325 start.go:143] virtualization: kvm guest
	I1101 09:30:05.531670   40325 out.go:179] * [pause-855890] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:30:05.532891   40325 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:30:05.532933   40325 notify.go:221] Checking for updates...
	I1101 09:30:05.535170   40325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:30:05.536684   40325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:30:05.537817   40325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:30:05.539514   40325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:30:05.540627   40325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:30:05.542398   40325 config.go:182] Loaded profile config "pause-855890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:05.543031   40325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:30:05.588692   40325 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 09:30:05.589889   40325 start.go:309] selected driver: kvm2
	I1101 09:30:05.589923   40325 start.go:930] validating driver "kvm2" against &{Name:pause-855890 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-855890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:30:05.590143   40325 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:30:05.591702   40325 cni.go:84] Creating CNI manager for ""
	I1101 09:30:05.591781   40325 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:30:05.591846   40325 start.go:353] cluster config:
	{Name:pause-855890 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-855890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:30:05.592018   40325 iso.go:125] acquiring lock: {Name:mk345092679db7c379cbaa00125c4f18e2b4a125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:05.593593   40325 out.go:179] * Starting "pause-855890" primary control-plane node in "pause-855890" cluster
	I1101 09:30:05.594827   40325 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:30:05.594886   40325 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:30:05.594903   40325 cache.go:59] Caching tarball of preloaded images
	I1101 09:30:05.595020   40325 preload.go:233] Found /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:30:05.595036   40325 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:30:05.595244   40325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/config.json ...
	I1101 09:30:05.595536   40325 start.go:360] acquireMachinesLock for pause-855890: {Name:mk8049b4e421873947dfa0bcd96201ccb1e1825c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 09:30:19.594019   40325 start.go:364] duration metric: took 13.998448068s to acquireMachinesLock for "pause-855890"
	I1101 09:30:19.594088   40325 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:30:19.594096   40325 fix.go:54] fixHost starting: 
	I1101 09:30:19.596629   40325 fix.go:112] recreateIfNeeded on pause-855890: state=Running err=<nil>
	W1101 09:30:19.596678   40325 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:30:19.598768   40325 out.go:252] * Updating the running kvm2 "pause-855890" VM ...
	I1101 09:30:19.598817   40325 machine.go:94] provisionDockerMachine start ...
	I1101 09:30:19.603984   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:19.604753   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:19.604787   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:19.605559   40325 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:19.605867   40325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I1101 09:30:19.605884   40325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:30:19.721449   40325 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-855890
	
	I1101 09:30:19.721485   40325 buildroot.go:166] provisioning hostname "pause-855890"
	I1101 09:30:19.725170   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:19.725682   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:19.725715   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:19.725958   40325 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:19.726275   40325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I1101 09:30:19.726295   40325 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-855890 && echo "pause-855890" | sudo tee /etc/hostname
	I1101 09:30:19.864791   40325 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-855890
	
	I1101 09:30:19.867773   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:19.868182   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:19.868219   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:19.868410   40325 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:19.868655   40325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I1101 09:30:19.868679   40325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-855890' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-855890/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-855890' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:30:19.982129   40325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:30:19.982158   40325 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/21835-5912/.minikube CaCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21835-5912/.minikube}
	I1101 09:30:19.982203   40325 buildroot.go:174] setting up certificates
	I1101 09:30:19.982228   40325 provision.go:84] configureAuth start
	I1101 09:30:19.985668   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:19.986384   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:19.986422   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:19.990373   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:19.991123   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:19.991158   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:19.991412   40325 provision.go:143] copyHostCerts
	I1101 09:30:19.991485   40325 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem, removing ...
	I1101 09:30:19.991504   40325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem
	I1101 09:30:19.991588   40325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/ca.pem (1082 bytes)
	I1101 09:30:19.991767   40325 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem, removing ...
	I1101 09:30:19.991783   40325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem
	I1101 09:30:19.991850   40325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/cert.pem (1123 bytes)
	I1101 09:30:19.991949   40325 exec_runner.go:144] found /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem, removing ...
	I1101 09:30:19.991964   40325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem
	I1101 09:30:19.992000   40325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21835-5912/.minikube/key.pem (1679 bytes)
	I1101 09:30:19.992073   40325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem org=jenkins.pause-855890 san=[127.0.0.1 192.168.50.183 localhost minikube pause-855890]
	I1101 09:30:20.208238   40325 provision.go:177] copyRemoteCerts
	I1101 09:30:20.208290   40325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:30:20.211539   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:20.211945   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:20.211969   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:20.212098   40325 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/pause-855890/id_rsa Username:docker}
	I1101 09:30:20.306547   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:30:20.339715   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1101 09:30:20.372911   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:30:20.406165   40325 provision.go:87] duration metric: took 423.924191ms to configureAuth
	I1101 09:30:20.406193   40325 buildroot.go:189] setting minikube options for container-runtime
	I1101 09:30:20.406448   40325 config.go:182] Loaded profile config "pause-855890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:20.409680   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:20.410113   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:20.410144   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:20.410331   40325 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:20.410581   40325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I1101 09:30:20.410601   40325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:30:26.022460   40325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:30:26.022488   40325 machine.go:97] duration metric: took 6.423661953s to provisionDockerMachine
	I1101 09:30:26.022501   40325 start.go:293] postStartSetup for "pause-855890" (driver="kvm2")
	I1101 09:30:26.022513   40325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:30:26.022571   40325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:30:26.026095   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:26.026594   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:26.026631   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:26.026787   40325 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/pause-855890/id_rsa Username:docker}
	I1101 09:30:26.117405   40325 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:30:26.124301   40325 info.go:137] Remote host: Buildroot 2025.02
	I1101 09:30:26.124334   40325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/addons for local assets ...
	I1101 09:30:26.124409   40325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21835-5912/.minikube/files for local assets ...
	I1101 09:30:26.124514   40325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem -> 97932.pem in /etc/ssl/certs
	I1101 09:30:26.124631   40325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:30:26.137792   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem --> /etc/ssl/certs/97932.pem (1708 bytes)
	I1101 09:30:26.171226   40325 start.go:296] duration metric: took 148.698061ms for postStartSetup
	I1101 09:30:26.171262   40325 fix.go:56] duration metric: took 6.577166078s for fixHost
	I1101 09:30:26.174315   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:26.174886   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:26.174918   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:26.175126   40325 main.go:143] libmachine: Using SSH client type: native
	I1101 09:30:26.175404   40325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 192.168.50.183 22 <nil> <nil>}
	I1101 09:30:26.175419   40325 main.go:143] libmachine: About to run SSH command:
	date +%s.%N
	I1101 09:30:26.287552   40325 main.go:143] libmachine: SSH cmd err, output: <nil>: 1761989426.282004500
	
	I1101 09:30:26.287580   40325 fix.go:216] guest clock: 1761989426.282004500
	I1101 09:30:26.287591   40325 fix.go:229] Guest: 2025-11-01 09:30:26.2820045 +0000 UTC Remote: 2025-11-01 09:30:26.171265864 +0000 UTC m=+20.709011621 (delta=110.738636ms)
	I1101 09:30:26.287614   40325 fix.go:200] guest clock delta is within tolerance: 110.738636ms
	I1101 09:30:26.287622   40325 start.go:83] releasing machines lock for "pause-855890", held for 6.693565474s
	I1101 09:30:26.290869   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:26.291447   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:26.291473   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:26.292135   40325 ssh_runner.go:195] Run: cat /version.json
	I1101 09:30:26.292226   40325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:30:26.295674   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:26.296061   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:26.296244   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:26.296275   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:26.296470   40325 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/pause-855890/id_rsa Username:docker}
	I1101 09:30:26.296522   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:26.296558   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:26.296714   40325 sshutil.go:53] new ssh client: &{IP:192.168.50.183 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/pause-855890/id_rsa Username:docker}
	I1101 09:30:26.401189   40325 ssh_runner.go:195] Run: systemctl --version
	I1101 09:30:26.408172   40325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:30:26.572165   40325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:30:26.583403   40325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:30:26.583473   40325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:30:26.599248   40325 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:30:26.599277   40325 start.go:496] detecting cgroup driver to use...
	I1101 09:30:26.599374   40325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:30:26.621690   40325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:30:26.641025   40325 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:30:26.641112   40325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:30:26.664402   40325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:30:26.683171   40325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:30:26.911267   40325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:30:27.097197   40325 docker.go:234] disabling docker service ...
	I1101 09:30:27.097299   40325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:30:27.133814   40325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:30:27.153827   40325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:30:27.353178   40325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:30:27.531720   40325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:30:27.553088   40325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:30:27.578231   40325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:30:27.578295   40325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:27.591454   40325 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1101 09:30:27.591523   40325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:27.607260   40325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:27.622024   40325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:27.636077   40325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:30:27.651024   40325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:27.664446   40325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:27.680233   40325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:30:27.697098   40325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:30:27.708589   40325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:30:27.724194   40325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:30:27.894118   40325 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:30:34.191895   40325 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.297728943s)
	I1101 09:30:34.191928   40325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:30:34.191972   40325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:30:34.198199   40325 start.go:564] Will wait 60s for crictl version
	I1101 09:30:34.198296   40325 ssh_runner.go:195] Run: which crictl
	I1101 09:30:34.203917   40325 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1101 09:30:34.254637   40325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1101 09:30:34.254720   40325 ssh_runner.go:195] Run: crio --version
	I1101 09:30:34.290901   40325 ssh_runner.go:195] Run: crio --version
	I1101 09:30:34.325467   40325 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.29.1 ...
	I1101 09:30:34.329671   40325 main.go:143] libmachine: domain pause-855890 has defined MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:34.330101   40325 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d2:ca:5b", ip: ""} in network mk-pause-855890: {Iface:virbr2 ExpiryTime:2025-11-01 10:29:01 +0000 UTC Type:0 Mac:52:54:00:d2:ca:5b Iaid: IPaddr:192.168.50.183 Prefix:24 Hostname:pause-855890 Clientid:01:52:54:00:d2:ca:5b}
	I1101 09:30:34.330131   40325 main.go:143] libmachine: domain pause-855890 has defined IP address 192.168.50.183 and MAC address 52:54:00:d2:ca:5b in network mk-pause-855890
	I1101 09:30:34.330415   40325 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1101 09:30:34.335576   40325 kubeadm.go:884] updating cluster {Name:pause-855890 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-855890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:30:34.335709   40325 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:30:34.335764   40325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:30:34.386116   40325 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:30:34.386148   40325 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:30:34.386242   40325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:30:34.425304   40325 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:30:34.425327   40325 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:30:34.425334   40325 kubeadm.go:935] updating node { 192.168.50.183 8443 v1.34.1 crio true true} ...
	I1101 09:30:34.425425   40325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-855890 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.183
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-855890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:30:34.425497   40325 ssh_runner.go:195] Run: crio config
	I1101 09:30:34.477706   40325 cni.go:84] Creating CNI manager for ""
	I1101 09:30:34.477728   40325 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:30:34.477744   40325 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:30:34.477766   40325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.183 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-855890 NodeName:pause-855890 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.183"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.183 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:30:34.477912   40325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.183
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-855890"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.183"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.183"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:30:34.477984   40325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:30:34.490515   40325 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:30:34.490588   40325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:30:34.502727   40325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1101 09:30:34.526977   40325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:30:34.548629   40325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1101 09:30:34.574558   40325 ssh_runner.go:195] Run: grep 192.168.50.183	control-plane.minikube.internal$ /etc/hosts
	I1101 09:30:34.579781   40325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:30:34.755688   40325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:30:34.775678   40325 certs.go:69] Setting up /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890 for IP: 192.168.50.183
	I1101 09:30:34.775703   40325 certs.go:195] generating shared ca certs ...
	I1101 09:30:34.775722   40325 certs.go:227] acquiring lock for ca certs: {Name:mk23a33d19209ad24f4406326ada43ab5cb57960 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:34.775924   40325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key
	I1101 09:30:34.775985   40325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key
	I1101 09:30:34.775998   40325 certs.go:257] generating profile certs ...
	I1101 09:30:34.776105   40325 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/client.key
	I1101 09:30:34.776180   40325 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/apiserver.key.762124bc
	I1101 09:30:34.776251   40325 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/proxy-client.key
	I1101 09:30:34.776394   40325 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793.pem (1338 bytes)
	W1101 09:30:34.776434   40325 certs.go:480] ignoring /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793_empty.pem, impossibly tiny 0 bytes
	I1101 09:30:34.776446   40325 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca-key.pem (1679 bytes)
	I1101 09:30:34.776481   40325 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:30:34.776516   40325 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:30:34.776549   40325 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/certs/key.pem (1679 bytes)
	I1101 09:30:34.776657   40325 certs.go:484] found cert: /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem (1708 bytes)
	I1101 09:30:34.777484   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:30:34.815528   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1101 09:30:34.848695   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:30:34.884520   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1101 09:30:34.916461   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 09:30:34.950421   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:30:34.982954   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:30:35.021050   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:30:35.053503   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:30:35.087004   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/certs/9793.pem --> /usr/share/ca-certificates/9793.pem (1338 bytes)
	I1101 09:30:35.121157   40325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/ssl/certs/97932.pem --> /usr/share/ca-certificates/97932.pem (1708 bytes)
	I1101 09:30:35.158174   40325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:30:35.181465   40325 ssh_runner.go:195] Run: openssl version
	I1101 09:30:35.188361   40325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:30:35.202536   40325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:30:35.208457   40325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:30:35.208527   40325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:30:35.216524   40325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:30:35.233361   40325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9793.pem && ln -fs /usr/share/ca-certificates/9793.pem /etc/ssl/certs/9793.pem"
	I1101 09:30:35.258792   40325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9793.pem
	I1101 09:30:35.273154   40325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 08:38 /usr/share/ca-certificates/9793.pem
	I1101 09:30:35.273236   40325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9793.pem
	I1101 09:30:35.307743   40325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9793.pem /etc/ssl/certs/51391683.0"
	I1101 09:30:35.341088   40325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/97932.pem && ln -fs /usr/share/ca-certificates/97932.pem /etc/ssl/certs/97932.pem"
	I1101 09:30:35.372599   40325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/97932.pem
	I1101 09:30:35.383588   40325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 08:38 /usr/share/ca-certificates/97932.pem
	I1101 09:30:35.383661   40325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/97932.pem
	I1101 09:30:35.403845   40325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/97932.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:30:35.428135   40325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:30:35.443782   40325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:30:35.466395   40325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:30:35.496156   40325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:30:35.521896   40325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:30:35.553150   40325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:30:35.574978   40325 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:30:35.589717   40325 kubeadm.go:401] StartCluster: {Name:pause-855890 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-855890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:30:35.589858   40325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:30:35.589962   40325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:30:35.730111   40325 cri.go:89] found id: "317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea"
	I1101 09:30:35.730138   40325 cri.go:89] found id: "c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d"
	I1101 09:30:35.730144   40325 cri.go:89] found id: "c9c45477ef7a11f3c5384fddc28a961004a06f9e87cb7c0c7faa4bea05d0b7ef"
	I1101 09:30:35.730149   40325 cri.go:89] found id: "08997e7db2aca95d5ecb59b8e74931e0088d93a536271a5655c21a88adf47b67"
	I1101 09:30:35.730153   40325 cri.go:89] found id: "fe4e98bd2bc2d934f1cf0670af24bc144ea2818c8501b0956e2a6840cec5315b"
	I1101 09:30:35.730158   40325 cri.go:89] found id: "fecfbd390ded36fbce68790e962191e98ce02780cad561cb62f869a589724115"
	I1101 09:30:35.730162   40325 cri.go:89] found id: ""
	I1101 09:30:35.730242   40325 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
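
The stderr excerpt above shows minikube validating each control-plane certificate with "openssl x509 -noout -checkend 86400" before restarting the cluster. A minimal Go sketch of an equivalent 24-hour expiry check is shown below; it is illustrative only (the certificate path is just an example), since minikube itself shells out to openssl as logged.

// certcheck.go - illustrative 24h expiry check, not minikube's implementation.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Example path; substitute any PEM-encoded certificate.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of "-checkend 86400": does the cert expire within 24 hours?
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24 hours")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least 24 more hours")
}
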
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-855890 -n pause-855890
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-855890 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-855890 logs -n 25: (1.460722885s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-997526 sudo docker system info                                                                                                                                                                                │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                               │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl cat cri-docker --no-pager                                                                                                                                                               │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                          │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                    │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo cri-dockerd --version                                                                                                                                                                             │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p NoKubernetes-709275 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-709275       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl status containerd --all --full --no-pager                                                                                                                                               │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl cat containerd --no-pager                                                                                                                                                               │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ delete  │ -p NoKubernetes-709275                                                                                                                                                                                                  │ NoKubernetes-709275       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ ssh     │ -p cilium-997526 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                        │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo cat /etc/containerd/config.toml                                                                                                                                                                   │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo containerd config dump                                                                                                                                                                            │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl status crio --all --full --no-pager                                                                                                                                                     │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl cat crio --no-pager                                                                                                                                                                     │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                           │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo crio config                                                                                                                                                                                       │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ delete  │ -p cilium-997526                                                                                                                                                                                                        │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p guest-649821 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                 │ guest-649821              │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p cert-expiration-602924 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-602924    │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p force-systemd-flag-806647 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-806647 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p pause-855890 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-855890              │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ ssh     │ force-systemd-flag-806647 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-806647 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ delete  │ -p force-systemd-flag-806647                                                                                                                                                                                            │ force-systemd-flag-806647 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p cert-options-414547 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-414547       │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:30:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:30:42.156125   40614 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:30:42.156310   40614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:30:42.156315   40614 out.go:374] Setting ErrFile to fd 2...
	I1101 09:30:42.156320   40614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:30:42.156674   40614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 09:30:42.157507   40614 out.go:368] Setting JSON to false
	I1101 09:30:42.158842   40614 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4389,"bootTime":1761985053,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:30:42.158944   40614 start.go:143] virtualization: kvm guest
	I1101 09:30:42.162111   40614 out.go:179] * [cert-options-414547] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:30:42.163424   40614 notify.go:221] Checking for updates...
	I1101 09:30:42.163462   40614 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:30:42.165034   40614 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:30:42.166373   40614 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:30:42.167873   40614 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:30:42.169360   40614 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:30:42.170870   40614 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:30:42.173039   40614 config.go:182] Loaded profile config "cert-expiration-602924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:42.173198   40614 config.go:182] Loaded profile config "guest-649821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 09:30:42.173363   40614 config.go:182] Loaded profile config "kubernetes-upgrade-133315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:42.173582   40614 config.go:182] Loaded profile config "pause-855890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:42.173766   40614 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:30:42.216396   40614 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 09:30:42.217589   40614 start.go:309] selected driver: kvm2
	I1101 09:30:42.217595   40614 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:30:42.217612   40614 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:30:42.218409   40614 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:30:42.218635   40614 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:30:42.218648   40614 cni.go:84] Creating CNI manager for ""
	I1101 09:30:42.218686   40614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:30:42.218690   40614 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:30:42.218719   40614 start.go:353] cluster config:
	{Name:cert-options-414547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-options-414547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:30:42.218887   40614 iso.go:125] acquiring lock: {Name:mk345092679db7c379cbaa00125c4f18e2b4a125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:42.221326   40614 out.go:179] * Starting "cert-options-414547" primary control-plane node in "cert-options-414547" cluster
	I1101 09:30:42.222521   40614 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:30:42.222565   40614 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:30:42.222585   40614 cache.go:59] Caching tarball of preloaded images
	I1101 09:30:42.222685   40614 preload.go:233] Found /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:30:42.222699   40614 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:30:42.222826   40614 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/cert-options-414547/config.json ...
	I1101 09:30:42.222848   40614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/cert-options-414547/config.json: {Name:mk3b7e467ef9ca601bd25e5919e28e9954756bf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:42.223039   40614 start.go:360] acquireMachinesLock for cert-options-414547: {Name:mk8049b4e421873947dfa0bcd96201ccb1e1825c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 09:30:42.223088   40614 start.go:364] duration metric: took 29.48µs to acquireMachinesLock for "cert-options-414547"
	I1101 09:30:42.223110   40614 start.go:93] Provisioning new machine with config: &{Name:cert-options-414547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-options-414547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:30:42.223183   40614 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 09:30:40.566999   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:30:40.567047   40325 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:30:40.864564   40325 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I1101 09:30:40.876377   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:30:40.876404   40325 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:30:41.364042   40325 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I1101 09:30:41.381676   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:30:41.381710   40325 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:30:41.864371   40325 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I1101 09:30:41.884606   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:30:41.884639   40325 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:30:42.364372   40325 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I1101 09:30:42.370923   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I1101 09:30:42.380257   40325 api_server.go:141] control plane version: v1.34.1
	I1101 09:30:42.380289   40325 api_server.go:131] duration metric: took 3.016804839s to wait for apiserver health ...
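
The repeated healthz probes above show minikube polling https://192.168.50.183:8443/healthz roughly every 500ms until the 500 responses (individual poststarthooks still failing) turn into a 200. A rough Go sketch of that kind of poll loop follows; TLS verification is skipped here purely to keep the example short (the real client trusts the cluster CA), and the address is copied from the log.

// healthzpoll.go - illustrative poll loop, not minikube's api_server.go.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify is only for brevity in this sketch.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.50.183:8443/healthz" // address taken from the log above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
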
	I1101 09:30:42.380302   40325 cni.go:84] Creating CNI manager for ""
	I1101 09:30:42.380310   40325 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:30:42.382748   40325 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 09:30:42.384302   40325 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 09:30:42.403958   40325 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 09:30:42.428428   40325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:30:42.442346   40325 system_pods.go:59] 6 kube-system pods found
	I1101 09:30:42.442385   40325 system_pods.go:61] "coredns-66bc5c9577-czz5l" [0464c9e4-46a6-477e-94b6-fed9a6eb2966] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:42.442394   40325 system_pods.go:61] "etcd-pause-855890" [229a3488-3c8b-45a2-8535-95dd1be1dcc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:30:42.442407   40325 system_pods.go:61] "kube-apiserver-pause-855890" [f6e98a8b-9606-43e9-a2ee-cae88a417568] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:30:42.442418   40325 system_pods.go:61] "kube-controller-manager-pause-855890" [41c64391-6367-45d0-af32-82b61b2e385f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:30:42.442432   40325 system_pods.go:61] "kube-proxy-9dngv" [74e9d2ed-3e06-4b92-b71e-0d3520d7d64b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:30:42.442449   40325 system_pods.go:61] "kube-scheduler-pause-855890" [471d7f99-bc78-44a6-9ab3-0004d3b1fd4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:30:42.442460   40325 system_pods.go:74] duration metric: took 14.007924ms to wait for pod list to return data ...
	I1101 09:30:42.442470   40325 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:30:42.446649   40325 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 09:30:42.446674   40325 node_conditions.go:123] node cpu capacity is 2
	I1101 09:30:42.446684   40325 node_conditions.go:105] duration metric: took 4.210831ms to run NodePressure ...
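
The NodePressure check above reads the node's reported capacity (17734596Ki of ephemeral storage, 2 CPUs) from the API server. The client-go sketch below fetches the same figures plus the pressure conditions for a node; the kubeconfig path and node name are placeholders, and this is an illustration rather than minikube's node_conditions.go.

// nodecheck.go - hedged client-go sketch of a node capacity/pressure check.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube uses its own managed kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-855890", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Capacity figures correspond to the values logged above.
	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
			fmt.Printf("%s=%s\n", c.Type, c.Status)
		}
	}
}
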
	I1101 09:30:42.446737   40325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:30:42.740857   40325 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 09:30:42.746250   40325 kubeadm.go:744] kubelet initialised
	I1101 09:30:42.746276   40325 kubeadm.go:745] duration metric: took 5.391237ms waiting for restarted kubelet to initialise ...
	I1101 09:30:42.746294   40325 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:30:42.768144   40325 ops.go:34] apiserver oom_adj: -16
	I1101 09:30:42.768170   40325 kubeadm.go:602] duration metric: took 6.919299576s to restartPrimaryControlPlane
	I1101 09:30:42.768183   40325 kubeadm.go:403] duration metric: took 7.178477096s to StartCluster
	I1101 09:30:42.768203   40325 settings.go:142] acquiring lock: {Name:mk818d33e162ca33774e3ab05f6aac30f8feaf64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:42.768316   40325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:30:42.769771   40325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/kubeconfig: {Name:mk599bec02e6b7062c3926243176124a4bc71dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:42.770082   40325 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:30:42.770416   40325 config.go:182] Loaded profile config "pause-855890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:42.770471   40325 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:30:42.771785   40325 out.go:179] * Enabled addons: 
	I1101 09:30:42.771802   40325 out.go:179] * Verifying Kubernetes components...
	I1101 09:30:42.731367   36048 api_server.go:269] stopped: https://192.168.39.77:8443/healthz: Get "https://192.168.39.77:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 09:30:42.731431   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:30:42.731507   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:30:42.790399   36048 cri.go:89] found id: "853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4"
	I1101 09:30:42.790423   36048 cri.go:89] found id: "223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9"
	I1101 09:30:42.790429   36048 cri.go:89] found id: ""
	I1101 09:30:42.790442   36048 logs.go:282] 2 containers: [853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9]
	I1101 09:30:42.790514   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.796902   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.802663   36048 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:30:42.802742   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:30:42.855650   36048 cri.go:89] found id: "f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92"
	I1101 09:30:42.855679   36048 cri.go:89] found id: ""
	I1101 09:30:42.855689   36048 logs.go:282] 1 containers: [f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92]
	I1101 09:30:42.855754   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.862061   36048 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:30:42.862153   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:30:42.907354   36048 cri.go:89] found id: "829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be"
	I1101 09:30:42.907378   36048 cri.go:89] found id: ""
	I1101 09:30:42.907388   36048 logs.go:282] 1 containers: [829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be]
	I1101 09:30:42.907448   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.912553   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:30:42.912627   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:30:42.963362   36048 cri.go:89] found id: "a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634"
	I1101 09:30:42.963388   36048 cri.go:89] found id: "4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558"
	I1101 09:30:42.963393   36048 cri.go:89] found id: ""
	I1101 09:30:42.963402   36048 logs.go:282] 2 containers: [a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634 4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558]
	I1101 09:30:42.963463   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.969033   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.973483   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:30:42.973552   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:30:43.024410   36048 cri.go:89] found id: "c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6"
	I1101 09:30:43.024436   36048 cri.go:89] found id: ""
	I1101 09:30:43.024446   36048 logs.go:282] 1 containers: [c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6]
	I1101 09:30:43.024509   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:43.029093   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:30:43.029172   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:30:43.073486   36048 cri.go:89] found id: "936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea"
	I1101 09:30:43.073511   36048 cri.go:89] found id: ""
	I1101 09:30:43.073521   36048 logs.go:282] 1 containers: [936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea]
	I1101 09:30:43.073585   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:43.079231   36048 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:30:43.079305   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:30:43.133027   36048 cri.go:89] found id: ""
	I1101 09:30:43.133057   36048 logs.go:282] 0 containers: []
	W1101 09:30:43.133068   36048 logs.go:284] No container was found matching "kindnet"
	I1101 09:30:43.133077   36048 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:30:43.133153   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:30:43.177195   36048 cri.go:89] found id: "705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d"
	I1101 09:30:43.177243   36048 cri.go:89] found id: ""
	I1101 09:30:43.177253   36048 logs.go:282] 1 containers: [705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d]
	I1101 09:30:43.177340   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:43.182098   36048 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:30:43.182128   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1101 09:30:42.773253   40325 addons.go:515] duration metric: took 2.783608ms for enable addons: enabled=[]
	I1101 09:30:42.773294   40325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:30:43.018240   40325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:30:43.047023   40325 node_ready.go:35] waiting up to 6m0s for node "pause-855890" to be "Ready" ...
	I1101 09:30:43.051015   40325 node_ready.go:49] node "pause-855890" is "Ready"
	I1101 09:30:43.051065   40325 node_ready.go:38] duration metric: took 3.965977ms for node "pause-855890" to be "Ready" ...
	I1101 09:30:43.051088   40325 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:30:43.051148   40325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:30:43.078816   40325 api_server.go:72] duration metric: took 308.69377ms to wait for apiserver process to appear ...
	I1101 09:30:43.078846   40325 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:30:43.078870   40325 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I1101 09:30:43.086353   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I1101 09:30:43.087710   40325 api_server.go:141] control plane version: v1.34.1
	I1101 09:30:43.087733   40325 api_server.go:131] duration metric: took 8.879713ms to wait for apiserver health ...
	I1101 09:30:43.087744   40325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:30:43.091519   40325 system_pods.go:59] 6 kube-system pods found
	I1101 09:30:43.091548   40325 system_pods.go:61] "coredns-66bc5c9577-czz5l" [0464c9e4-46a6-477e-94b6-fed9a6eb2966] Running
	I1101 09:30:43.091560   40325 system_pods.go:61] "etcd-pause-855890" [229a3488-3c8b-45a2-8535-95dd1be1dcc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:30:43.091569   40325 system_pods.go:61] "kube-apiserver-pause-855890" [f6e98a8b-9606-43e9-a2ee-cae88a417568] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:30:43.091580   40325 system_pods.go:61] "kube-controller-manager-pause-855890" [41c64391-6367-45d0-af32-82b61b2e385f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:30:43.091588   40325 system_pods.go:61] "kube-proxy-9dngv" [74e9d2ed-3e06-4b92-b71e-0d3520d7d64b] Running
	I1101 09:30:43.091607   40325 system_pods.go:61] "kube-scheduler-pause-855890" [471d7f99-bc78-44a6-9ab3-0004d3b1fd4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:30:43.091615   40325 system_pods.go:74] duration metric: took 3.864338ms to wait for pod list to return data ...
	I1101 09:30:43.091623   40325 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:30:43.096713   40325 default_sa.go:45] found service account: "default"
	I1101 09:30:43.096741   40325 default_sa.go:55] duration metric: took 5.110825ms for default service account to be created ...
	I1101 09:30:43.096754   40325 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:30:43.101053   40325 system_pods.go:86] 6 kube-system pods found
	I1101 09:30:43.101091   40325 system_pods.go:89] "coredns-66bc5c9577-czz5l" [0464c9e4-46a6-477e-94b6-fed9a6eb2966] Running
	I1101 09:30:43.101104   40325 system_pods.go:89] "etcd-pause-855890" [229a3488-3c8b-45a2-8535-95dd1be1dcc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:30:43.101114   40325 system_pods.go:89] "kube-apiserver-pause-855890" [f6e98a8b-9606-43e9-a2ee-cae88a417568] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:30:43.101127   40325 system_pods.go:89] "kube-controller-manager-pause-855890" [41c64391-6367-45d0-af32-82b61b2e385f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:30:43.101134   40325 system_pods.go:89] "kube-proxy-9dngv" [74e9d2ed-3e06-4b92-b71e-0d3520d7d64b] Running
	I1101 09:30:43.101143   40325 system_pods.go:89] "kube-scheduler-pause-855890" [471d7f99-bc78-44a6-9ab3-0004d3b1fd4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:30:43.101152   40325 system_pods.go:126] duration metric: took 4.390473ms to wait for k8s-apps to be running ...
	I1101 09:30:43.101163   40325 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:30:43.101236   40325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:30:43.126362   40325 system_svc.go:56] duration metric: took 25.189413ms WaitForService to wait for kubelet
	I1101 09:30:43.126392   40325 kubeadm.go:587] duration metric: took 356.274473ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:30:43.126412   40325 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:30:43.134163   40325 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 09:30:43.134195   40325 node_conditions.go:123] node cpu capacity is 2
	I1101 09:30:43.134233   40325 node_conditions.go:105] duration metric: took 7.814319ms to run NodePressure ...
	I1101 09:30:43.134250   40325 start.go:242] waiting for startup goroutines ...
	I1101 09:30:43.134264   40325 start.go:247] waiting for cluster config update ...
	I1101 09:30:43.134276   40325 start.go:256] writing updated cluster config ...
	I1101 09:30:43.134686   40325 ssh_runner.go:195] Run: rm -f paused
	I1101 09:30:43.141856   40325 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:30:43.142801   40325 kapi.go:59] client config for pause-855890: &rest.Config{Host:"https://192.168.50.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:30:43.146172   40325 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czz5l" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:43.151050   40325 pod_ready.go:94] pod "coredns-66bc5c9577-czz5l" is "Ready"
	I1101 09:30:43.151078   40325 pod_ready.go:86] duration metric: took 4.878579ms for pod "coredns-66bc5c9577-czz5l" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:43.153331   40325 pod_ready.go:83] waiting for pod "etcd-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:30:45.159744   40325 pod_ready.go:104] pod "etcd-pause-855890" is not "Ready", error: <nil>
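
pod_ready.go polls each control-plane pod through the Kubernetes client configured above until its Ready condition is true or the pod is gone. A rough sketch of the same wait, swapping the Go client for a kubectl subprocess (kubectl assumed on PATH; waitPodReady is an illustrative name, and the 4-minute budget mirrors the "extra waiting up to 4m0s" seen in the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls the pod's Ready condition until it is "True" or the deadline passes.
func waitPodReady(name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pod", name,
			"-n", namespace, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second) // roughly the ~2s cadence of the retries above
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, name, timeout)
}

func main() {
	if err := waitPodReady("etcd-pause-855890", "kube-system", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
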
	I1101 09:30:42.225454   40614 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1101 09:30:42.225584   40614 start.go:159] libmachine.API.Create for "cert-options-414547" (driver="kvm2")
	I1101 09:30:42.225623   40614 client.go:173] LocalClient.Create starting
	I1101 09:30:42.225680   40614 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem
	I1101 09:30:42.225707   40614 main.go:143] libmachine: Decoding PEM data...
	I1101 09:30:42.225720   40614 main.go:143] libmachine: Parsing certificate...
	I1101 09:30:42.225775   40614 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem
	I1101 09:30:42.225790   40614 main.go:143] libmachine: Decoding PEM data...
	I1101 09:30:42.225805   40614 main.go:143] libmachine: Parsing certificate...
	I1101 09:30:42.226149   40614 main.go:143] libmachine: creating domain...
	I1101 09:30:42.226154   40614 main.go:143] libmachine: creating network...
	I1101 09:30:42.227658   40614 main.go:143] libmachine: found existing default network
	I1101 09:30:42.227851   40614 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:30:42.228644   40614 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:98:e9:70} reservation:<nil>}
	I1101 09:30:42.229512   40614 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fe:b9:8d} reservation:<nil>}
	I1101 09:30:42.230460   40614 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:c4:3e} reservation:<nil>}
	I1101 09:30:42.231629   40614 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:72:53:ad} reservation:<nil>}
	I1101 09:30:42.232956   40614 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e3a840}
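
network.go walks candidate private /24 subnets and skips any whose gateway address is already bound to a host interface (the existing virbrN bridges), settling here on 192.168.83.0/24. A minimal sketch of that scan using only the Go standard library; the candidate octet list is hard-coded for illustration:

package main

import (
	"fmt"
	"net"
)

// hostHasAddr reports whether any local interface already holds the given IP.
func hostHasAddr(ip string) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.String() == ip {
			return true
		}
	}
	return false
}

func main() {
	// Candidate third octets mirror the ones probed in the log (39, 50, 61, 72, 83, ...).
	for _, octet := range []int{39, 50, 61, 72, 83, 94} {
		gw := fmt.Sprintf("192.168.%d.1", octet)
		if hostHasAddr(gw) {
			fmt.Printf("skipping subnet 192.168.%d.0/24: gateway %s is taken\n", octet, gw)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
		return
	}
	fmt.Println("no free /24 found among candidates")
}
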
	I1101 09:30:42.233044   40614 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-cert-options-414547</name>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:30:42.239237   40614 main.go:143] libmachine: creating private network mk-cert-options-414547 192.168.83.0/24...
	I1101 09:30:42.321104   40614 main.go:143] libmachine: private network mk-cert-options-414547 192.168.83.0/24 created
	I1101 09:30:42.321380   40614 main.go:143] libmachine: <network>
	  <name>mk-cert-options-414547</name>
	  <uuid>cd7f132d-8dcf-4703-ac4f-189f6b585de9</uuid>
	  <bridge name='virbr5' stp='on' delay='0'/>
	  <mac address='52:54:00:49:68:5d'/>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
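
minikube drives libvirt through its Go bindings; as a rough CLI-level equivalent of the "creating private network" step above, the sketch below writes a network XML like the one just printed to a temporary file, registers it with `virsh net-define`, and starts it. The network name mk-demo and the error handling are illustrative only.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const networkXML = `<network>
  <name>mk-demo</name>
  <dns enable='no'/>
  <ip address='192.168.83.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.83.2' end='192.168.83.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	tmp, err := os.CreateTemp("", "net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	if _, err := tmp.WriteString(networkXML); err != nil {
		panic(err)
	}
	tmp.Close()

	// Register the persistent network definition, then bring it up.
	for _, args := range [][]string{
		{"net-define", tmp.Name()},
		{"net-start", "mk-demo"},
	} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s\n", args, out)
		if err != nil {
			panic(err)
		}
	}
}
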
	
	I1101 09:30:42.321416   40614 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547 ...
	I1101 09:30:42.321441   40614 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21835-5912/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 09:30:42.321447   40614 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:30:42.321556   40614 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21835-5912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21835-5912/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 09:30:42.556808   40614 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/id_rsa...
	I1101 09:30:42.894999   40614 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/cert-options-414547.rawdisk...
	I1101 09:30:42.895041   40614 main.go:143] libmachine: Writing magic tar header
	I1101 09:30:42.895101   40614 main.go:143] libmachine: Writing SSH key tar header
	I1101 09:30:42.895202   40614 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547 ...
	I1101 09:30:42.895305   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547
	I1101 09:30:42.895341   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547 (perms=drwx------)
	I1101 09:30:42.895359   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube/machines
	I1101 09:30:42.895373   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube/machines (perms=drwxr-xr-x)
	I1101 09:30:42.895387   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:30:42.895400   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube (perms=drwxr-xr-x)
	I1101 09:30:42.895411   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912
	I1101 09:30:42.895423   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912 (perms=drwxrwxr-x)
	I1101 09:30:42.895436   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 09:30:42.895447   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 09:30:42.895456   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 09:30:42.895465   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 09:30:42.895476   40614 main.go:143] libmachine: checking permissions on dir: /home
	I1101 09:30:42.895484   40614 main.go:143] libmachine: skipping /home - not owner
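
The permission pass above makes every ancestor of the machine store that the CI user owns searchable (execute bits set), stopping at /home, so the libvirt/qemu user can traverse into the disk image directory. A simplified sketch, with the ownership test reduced to "stop once we leave $HOME" and the profile name demo made up for the example:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	home, _ := os.UserHomeDir()
	dir := filepath.Join(home, ".minikube", "machines", "demo")

	for {
		info, err := os.Stat(dir)
		if err != nil {
			fmt.Println("skipping", dir, ":", err)
			break
		}
		mode := info.Mode().Perm()
		if mode&0o111 != 0o111 {
			// Add the execute/search bits without touching read/write bits.
			if err := os.Chmod(dir, mode|0o111); err != nil {
				fmt.Println("chmod failed on", dir, ":", err)
			}
		}
		if dir == home || dir == "/" {
			break // the log stops at /home because the CI user does not own it
		}
		dir = filepath.Dir(dir)
	}
}
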
	I1101 09:30:42.895489   40614 main.go:143] libmachine: defining domain...
	I1101 09:30:42.896906   40614 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>cert-options-414547</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/cert-options-414547.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-cert-options-414547'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
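
Again, the real code defines and boots the domain via the libvirt API; a rough virsh-based equivalent of "defining domain", "ensuring networks are active", and "starting domain" would look like the sketch below (domain.xml stands in for the XML just shown).

package main

import (
	"fmt"
	"os/exec"
)

func virsh(args ...string) error {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	fmt.Printf("virsh %v: %s\n", args, out)
	return err
}

func main() {
	// Register the persistent domain definition from the XML shown above.
	if err := virsh("define", "domain.xml"); err != nil {
		panic(err)
	}
	// Both attached networks must be active before the guest can get a DHCP lease.
	for _, netName := range []string{"default", "mk-cert-options-414547"} {
		_ = virsh("net-start", netName) // an already-active network just returns an error we ignore
	}
	// Boot the domain; the log's "waiting for IP" loop begins after this point.
	if err := virsh("start", "cert-options-414547"); err != nil {
		panic(err)
	}
}
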
	
	I1101 09:30:42.905632   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:df:49:94 in network default
	I1101 09:30:42.906411   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:42.906424   40614 main.go:143] libmachine: starting domain...
	I1101 09:30:42.906427   40614 main.go:143] libmachine: ensuring networks are active...
	I1101 09:30:42.907546   40614 main.go:143] libmachine: Ensuring network default is active
	I1101 09:30:42.908169   40614 main.go:143] libmachine: Ensuring network mk-cert-options-414547 is active
	I1101 09:30:42.909245   40614 main.go:143] libmachine: getting domain XML...
	I1101 09:30:42.910637   40614 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>cert-options-414547</name>
	  <uuid>d1ca6388-16c4-4754-acbe-6531e17fb0b8</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/cert-options-414547.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:1f:e9:df'/>
	      <source network='mk-cert-options-414547'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:df:49:94'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:30:44.330339   40614 main.go:143] libmachine: waiting for domain to start...
	I1101 09:30:44.331743   40614 main.go:143] libmachine: domain is now running
	I1101 09:30:44.331751   40614 main.go:143] libmachine: waiting for IP...
	I1101 09:30:44.332517   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:44.333016   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:44.333020   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:44.333337   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:44.333366   40614 retry.go:31] will retry after 188.233686ms: waiting for domain to come up
	I1101 09:30:44.523749   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:44.524515   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:44.524523   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:44.524940   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:44.524973   40614 retry.go:31] will retry after 388.892821ms: waiting for domain to come up
	I1101 09:30:44.915566   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:44.916321   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:44.916332   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:44.916667   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:44.916697   40614 retry.go:31] will retry after 379.270751ms: waiting for domain to come up
	I1101 09:30:45.297026   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:45.297779   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:45.297790   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:45.298189   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:45.298233   40614 retry.go:31] will retry after 467.831668ms: waiting for domain to come up
	I1101 09:30:45.767980   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:45.768588   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:45.768597   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:45.768985   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:45.769029   40614 retry.go:31] will retry after 588.768021ms: waiting for domain to come up
	I1101 09:30:46.359958   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:46.360695   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:46.360704   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:46.361095   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:46.361126   40614 retry.go:31] will retry after 832.145632ms: waiting for domain to come up
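
The "waiting for IP" loop above asks libvirt for the guest's addresses, first from the DHCP lease table and then from ARP, and retries with a growing delay until an address appears. A stand-alone sketch of that loop, assuming a libvirt new enough that `virsh domifaddr --source arp` is available and with a simplified backoff schedule:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// domainIP returns the first IPv4 address libvirt reports for the domain,
// consulting the lease table first and ARP as a fallback.
func domainIP(domain string) (string, bool) {
	for _, source := range []string{"lease", "arp"} {
		out, err := exec.Command("virsh", "domifaddr", domain, "--source", source).Output()
		if err != nil {
			continue
		}
		for _, line := range strings.Split(string(out), "\n") {
			if strings.Contains(line, "ipv4") {
				fields := strings.Fields(line)
				// The last field looks like 192.168.83.27/24; strip the prefix length.
				return strings.Split(fields[len(fields)-1], "/")[0], true
			}
		}
	}
	return "", false
}

func main() {
	backoff := 200 * time.Millisecond
	for attempt := 1; attempt <= 20; attempt++ {
		if ip, ok := domainIP("cert-options-414547"); ok {
			fmt.Println("domain is up at", ip)
			return
		}
		fmt.Printf("attempt %d: no address yet, retrying after %s\n", attempt, backoff)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the wait, loosely mirroring the log's increasing retry intervals
	}
	fmt.Println("gave up waiting for the domain to come up")
}
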
	I1101 09:30:48.761156   36048 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (5.57900129s)
	W1101 09:30:48.761254   36048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:36246->127.0.0.1:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:36246->127.0.0.1:8443: read: connection reset by peer
	
	** /stderr **
	I1101 09:30:48.761271   36048 logs.go:123] Gathering logs for etcd [f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92] ...
	I1101 09:30:48.761287   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92"
	I1101 09:30:48.823695   36048 logs.go:123] Gathering logs for coredns [829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be] ...
	I1101 09:30:48.823746   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be"
	I1101 09:30:48.865957   36048 logs.go:123] Gathering logs for kube-scheduler [a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634] ...
	I1101 09:30:48.865993   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634"
	I1101 09:30:48.927443   36048 logs.go:123] Gathering logs for kube-scheduler [4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558] ...
	I1101 09:30:48.927484   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558"
	I1101 09:30:48.975530   36048 logs.go:123] Gathering logs for kube-proxy [c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6] ...
	I1101 09:30:48.975560   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6"
	I1101 09:30:49.022574   36048 logs.go:123] Gathering logs for kube-apiserver [853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4] ...
	I1101 09:30:49.022605   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4"
	I1101 09:30:49.070667   36048 logs.go:123] Gathering logs for kube-apiserver [223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9] ...
	I1101 09:30:49.070702   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9"
	W1101 09:30:49.113734   36048 logs.go:130] failed kube-apiserver [223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9": Process exited with status 1
	stdout:
	
	stderr:
	E1101 09:30:49.103774    4290 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9\": container with ID starting with 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9 not found: ID does not exist" containerID="223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9"
	time="2025-11-01T09:30:49Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9\": container with ID starting with 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1101 09:30:49.103774    4290 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9\": container with ID starting with 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9 not found: ID does not exist" containerID="223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9"
	time="2025-11-01T09:30:49Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9\": container with ID starting with 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9 not found: ID does not exist"
	
	** /stderr **
	I1101 09:30:49.113755   36048 logs.go:123] Gathering logs for kube-controller-manager [936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea] ...
	I1101 09:30:49.113768   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea"
	I1101 09:30:49.173991   36048 logs.go:123] Gathering logs for storage-provisioner [705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d] ...
	I1101 09:30:49.174034   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d"
	I1101 09:30:49.216102   36048 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:30:49.216134   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
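
Each "Gathering logs for ..." step above shells out to crictl with a fixed 400-line tail per container. A minimal sketch of that collection pattern, reusing the etcd and coredns container IDs from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Container IDs as resolved earlier in this run.
	containers := map[string]string{
		"etcd":    "f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92",
		"coredns": "829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be",
	}
	for name, id := range containers {
		// Mirrors: sudo /usr/bin/crictl logs --tail 400 <id>
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("=== %s (%s) ===\n%s\n", name, id, out)
	}
}
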
	W1101 09:30:47.160551   40325 pod_ready.go:104] pod "etcd-pause-855890" is not "Ready", error: <nil>
	W1101 09:30:49.161830   40325 pod_ready.go:104] pod "etcd-pause-855890" is not "Ready", error: <nil>
	I1101 09:30:47.194506   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:47.195237   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:47.195246   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:47.195666   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:47.195694   40614 retry.go:31] will retry after 910.702815ms: waiting for domain to come up
	I1101 09:30:48.107847   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:48.108510   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:48.108517   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:48.108783   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:48.108807   40614 retry.go:31] will retry after 1.132074356s: waiting for domain to come up
	I1101 09:30:49.242573   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:49.243368   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:49.243379   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:49.243746   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:49.243795   40614 retry.go:31] will retry after 1.592675697s: waiting for domain to come up
	I1101 09:30:50.837893   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:50.838604   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:50.838613   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:50.838934   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:50.838959   40614 retry.go:31] will retry after 1.669106824s: waiting for domain to come up
	I1101 09:30:49.615621   36048 logs.go:123] Gathering logs for container status ...
	I1101 09:30:49.615657   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:30:49.669953   36048 logs.go:123] Gathering logs for kubelet ...
	I1101 09:30:49.669981   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:30:49.769907   36048 logs.go:123] Gathering logs for dmesg ...
	I1101 09:30:49.769956   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:30:52.289438   36048 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I1101 09:30:52.290253   36048 api_server.go:269] stopped: https://192.168.39.77:8443/healthz: Get "https://192.168.39.77:8443/healthz": dial tcp 192.168.39.77:8443: connect: connection refused
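
The healthz probe that just failed is a plain HTTPS GET against the apiserver. A minimal sketch using net/http; it skips certificate verification only to keep the example short, whereas the real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	resp, err := client.Get("https://192.168.39.77:8443/healthz")
	if err != nil {
		// This is the "stopped: ... connection refused" branch seen in the log.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
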
	I1101 09:30:52.290320   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:30:52.290385   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:30:52.341979   36048 cri.go:89] found id: "853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4"
	I1101 09:30:52.342012   36048 cri.go:89] found id: ""
	I1101 09:30:52.342021   36048 logs.go:282] 1 containers: [853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4]
	I1101 09:30:52.342100   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.348841   36048 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:30:52.348934   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:30:52.391958   36048 cri.go:89] found id: "f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92"
	I1101 09:30:52.391988   36048 cri.go:89] found id: ""
	I1101 09:30:52.391999   36048 logs.go:282] 1 containers: [f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92]
	I1101 09:30:52.392073   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.397045   36048 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:30:52.397144   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:30:52.452514   36048 cri.go:89] found id: "829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be"
	I1101 09:30:52.452544   36048 cri.go:89] found id: ""
	I1101 09:30:52.452555   36048 logs.go:282] 1 containers: [829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be]
	I1101 09:30:52.452617   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.457241   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:30:52.457338   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:30:52.506045   36048 cri.go:89] found id: "a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634"
	I1101 09:30:52.506071   36048 cri.go:89] found id: "4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558"
	I1101 09:30:52.506077   36048 cri.go:89] found id: ""
	I1101 09:30:52.506098   36048 logs.go:282] 2 containers: [a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634 4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558]
	I1101 09:30:52.506165   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.511265   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.515940   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:30:52.516021   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:30:52.559510   36048 cri.go:89] found id: "c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6"
	I1101 09:30:52.559536   36048 cri.go:89] found id: ""
	I1101 09:30:52.559548   36048 logs.go:282] 1 containers: [c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6]
	I1101 09:30:52.559618   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.564486   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:30:52.564567   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:30:52.614947   36048 cri.go:89] found id: "936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea"
	I1101 09:30:52.614978   36048 cri.go:89] found id: ""
	I1101 09:30:52.614989   36048 logs.go:282] 1 containers: [936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea]
	I1101 09:30:52.615057   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.620382   36048 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:30:52.620474   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:30:52.668290   36048 cri.go:89] found id: ""
	I1101 09:30:52.668320   36048 logs.go:282] 0 containers: []
	W1101 09:30:52.668331   36048 logs.go:284] No container was found matching "kindnet"
	I1101 09:30:52.668341   36048 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:30:52.668413   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:30:52.713999   36048 cri.go:89] found id: "705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d"
	I1101 09:30:52.714024   36048 cri.go:89] found id: ""
	I1101 09:30:52.714033   36048 logs.go:282] 1 containers: [705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d]
	I1101 09:30:52.714095   36048 ssh_runner.go:195] Run: which crictl
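
Before tailing logs, the collector resolves container IDs per component with `crictl ps -a --quiet --name=<component>`, as the cri.go lines above show. A short sketch of that lookup:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose name matches.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "storage-provisioner"} {
		fmt.Printf("%s: %v\n", component, containerIDs(component))
	}
}
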
	I1101 09:30:52.719682   36048 logs.go:123] Gathering logs for etcd [f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92] ...
	I1101 09:30:52.719705   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92"
	I1101 09:30:52.787556   36048 logs.go:123] Gathering logs for coredns [829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be] ...
	I1101 09:30:52.787597   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be"
	I1101 09:30:52.842542   36048 logs.go:123] Gathering logs for kube-scheduler [a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634] ...
	I1101 09:30:52.842577   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634"
	I1101 09:30:52.906147   36048 logs.go:123] Gathering logs for kube-scheduler [4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558] ...
	I1101 09:30:52.906184   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558"
	I1101 09:30:52.959353   36048 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:30:52.959387   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:30:53.343493   36048 logs.go:123] Gathering logs for dmesg ...
	I1101 09:30:53.343529   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:30:53.364422   36048 logs.go:123] Gathering logs for kube-proxy [c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6] ...
	I1101 09:30:53.364452   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6"
	I1101 09:30:53.413696   36048 logs.go:123] Gathering logs for kube-controller-manager [936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea] ...
	I1101 09:30:53.413727   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea"
	I1101 09:30:53.474763   36048 logs.go:123] Gathering logs for storage-provisioner [705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d] ...
	I1101 09:30:53.474801   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d"
	I1101 09:30:53.529352   36048 logs.go:123] Gathering logs for container status ...
	I1101 09:30:53.529386   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:30:53.588413   36048 logs.go:123] Gathering logs for kubelet ...
	I1101 09:30:53.588442   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:30:53.683444   36048 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:30:53.683492   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:30:53.764520   36048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:30:53.764550   36048 logs.go:123] Gathering logs for kube-apiserver [853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4] ...
	I1101 09:30:53.764567   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4"
	W1101 09:30:51.660037   40325 pod_ready.go:104] pod "etcd-pause-855890" is not "Ready", error: <nil>
	I1101 09:30:53.161330   40325 pod_ready.go:94] pod "etcd-pause-855890" is "Ready"
	I1101 09:30:53.161371   40325 pod_ready.go:86] duration metric: took 10.008019678s for pod "etcd-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.165392   40325 pod_ready.go:83] waiting for pod "kube-apiserver-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.172466   40325 pod_ready.go:94] pod "kube-apiserver-pause-855890" is "Ready"
	I1101 09:30:53.172495   40325 pod_ready.go:86] duration metric: took 7.072878ms for pod "kube-apiserver-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.175701   40325 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.183604   40325 pod_ready.go:94] pod "kube-controller-manager-pause-855890" is "Ready"
	I1101 09:30:53.183633   40325 pod_ready.go:86] duration metric: took 7.905744ms for pod "kube-controller-manager-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.187754   40325 pod_ready.go:83] waiting for pod "kube-proxy-9dngv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.358030   40325 pod_ready.go:94] pod "kube-proxy-9dngv" is "Ready"
	I1101 09:30:53.358076   40325 pod_ready.go:86] duration metric: took 170.284752ms for pod "kube-proxy-9dngv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.557973   40325 pod_ready.go:83] waiting for pod "kube-scheduler-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:55.564590   40325 pod_ready.go:94] pod "kube-scheduler-pause-855890" is "Ready"
	I1101 09:30:55.564618   40325 pod_ready.go:86] duration metric: took 2.006613741s for pod "kube-scheduler-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:55.564630   40325 pod_ready.go:40] duration metric: took 12.422741907s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:30:55.609099   40325 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:30:55.610975   40325 out.go:179] * Done! kubectl is now configured to use "pause-855890" cluster and "default" namespace by default
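
The "minor skew: 0" line just above compares the minor versions of the local kubectl and the cluster. A tiny sketch of that comparison, with the two version strings hard-coded from this run:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version string.
func minor(v string) int {
	parts := strings.Split(v, ".")
	if len(parts) < 2 {
		return 0
	}
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.34.1", "1.34.1"
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
}
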
	
	
	==> CRI-O <==
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.263166953Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989456263140238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=86addbce-88a5-4abf-a1ed-77e15bc28159 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.264028649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0946782-16e4-4966-a561-0ec1ee6f1197 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.264120673Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0946782-16e4-4966-a561-0ec1ee6f1197 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.264617428Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b05a9cd351437e0fa07813290ba4db602fb5fd329e87fef8dac936cb25fbbdc5,PodSandboxId:c2bc92475d1593ddf519c895c522f57d80caf69d9da18a7c1066078044d847f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989441575435230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8842af31785cd52ebbdafa0959eae8cea4aee7dfa504f74935e64f90709ff576,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e731e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989440850021913,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa8d5c2ef0eef267589224de026d832239ec04df5fee97d3944f096c2351184,PodSandboxId:adf90291a5f3d0985ab7c580a2e1beae78e98f9c1a0ab3417500f9d17663d454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989435967848374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0ac0166bc29fa2a4344cda6805cc8ba9089589fa84ee6e2de3bd9a26d8960,PodSandboxId:87b0274ceb976b02f4f6978ba9828b00a55cdb32f959f141e3e82d9270de8d27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989435814747200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f695868c1e5050f1140aca569a60bff1c122e7424221706218b843055d4559d,PodSandboxId:4c3d559ba5fa01bc3aac60801a64bc359f8ae76726193fc6c348fbc642ffd2dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989435782632619,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c13e07694a27604a7283ecab14599ca94c057df8e41b2c9ed9bb8fb8b083292,PodSandboxId:97776589cf2da870fec960d9f5790315b7dd3d29fa672414d44ee224ffc83a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989435747651560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e73
1e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989435687681651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea,PodSandboxId:49c584770c098c55d794e0d0422cae69111ee9cc441aa54562e8a8a6a29542d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989370726875613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d,PodSandboxId:3be846a37f831be91223346e485bf9483fb3085e5cba6ec310c004d56a28d2c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989370042559423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b
71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08997e7db2aca95d5ecb59b8e74931e0088d93a536271a5655c21a88adf47b67,PodSandboxId:43314a6aa6707c482ada8cf19c54fa3f0cfabbf57e068ef121b7a3bdf613b6f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989357561676152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe4e98bd2bc2d934f1cf0670af24bc144ea2818c8501b0956e2a6840cec5315b,PodSandboxId:0ee04d640f242b1aa4baed65c7026f00ff1865df3a005bb54e2f998520f2881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989357544814666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecfbd390ded36fbce68790e962191e98ce02780cad561cb62f869a589724115,PodSandboxId:7631e8b0c6a1306cbd41fc71db442dd5cac3573abbdc7ae08758d607584b5de3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989357502177160,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0946782-16e4-4966-a561-0ec1ee6f1197 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.314483940Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7dd070ad-a493-4e24-bd6c-b84750f6cc25 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.314639072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7dd070ad-a493-4e24-bd6c-b84750f6cc25 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.316025928Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1188d36-b684-45a8-8172-1771c3f8ed43 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.317448828Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989456317328939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1188d36-b684-45a8-8172-1771c3f8ed43 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.318497058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9d6c969-c28f-48ba-a154-ed49a3d224e3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.318821202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9d6c969-c28f-48ba-a154-ed49a3d224e3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.319467848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b05a9cd351437e0fa07813290ba4db602fb5fd329e87fef8dac936cb25fbbdc5,PodSandboxId:c2bc92475d1593ddf519c895c522f57d80caf69d9da18a7c1066078044d847f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989441575435230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8842af31785cd52ebbdafa0959eae8cea4aee7dfa504f74935e64f90709ff576,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e731e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989440850021913,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa8d5c2ef0eef267589224de026d832239ec04df5fee97d3944f096c2351184,PodSandboxId:adf90291a5f3d0985ab7c580a2e1beae78e98f9c1a0ab3417500f9d17663d454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989435967848374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0ac0166bc29fa2a4344cda6805cc8ba9089589fa84ee6e2de3bd9a26d8960,PodSandboxId:87b0274ceb976b02f4f6978ba9828b00a55cdb32f959f141e3e82d9270de8d27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989435814747200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f695868c1e5050f1140aca569a60bff1c122e7424221706218b843055d4559d,PodSandboxId:4c3d559ba5fa01bc3aac60801a64bc359f8ae76726193fc6c348fbc642ffd2dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989435782632619,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c13e07694a27604a7283ecab14599ca94c057df8e41b2c9ed9bb8fb8b083292,PodSandboxId:97776589cf2da870fec960d9f5790315b7dd3d29fa672414d44ee224ffc83a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989435747651560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e73
1e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989435687681651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea,PodSandboxId:49c584770c098c55d794e0d0422cae69111ee9cc441aa54562e8a8a6a29542d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989370726875613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d,PodSandboxId:3be846a37f831be91223346e485bf9483fb3085e5cba6ec310c004d56a28d2c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989370042559423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b
71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08997e7db2aca95d5ecb59b8e74931e0088d93a536271a5655c21a88adf47b67,PodSandboxId:43314a6aa6707c482ada8cf19c54fa3f0cfabbf57e068ef121b7a3bdf613b6f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989357561676152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe4e98bd2bc2d934f1cf0670af24bc144ea2818c8501b0956e2a6840cec5315b,PodSandboxId:0ee04d640f242b1aa4baed65c7026f00ff1865df3a005bb54e2f998520f2881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989357544814666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecfbd390ded36fbce68790e962191e98ce02780cad561cb62f869a589724115,PodSandboxId:7631e8b0c6a1306cbd41fc71db442dd5cac3573abbdc7ae08758d607584b5de3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989357502177160,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c9d6c969-c28f-48ba-a154-ed49a3d224e3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.373462505Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69090491-bef2-4b74-8780-b9dbf2da036f name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.373815708Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69090491-bef2-4b74-8780-b9dbf2da036f name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.377320853Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0b86b525-19d5-4882-8b7c-4902e8e4c0f9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.377946594Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989456377916826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b86b525-19d5-4882-8b7c-4902e8e4c0f9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.378607437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a611975-c775-423e-b49f-547a63c200d5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.378744687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a611975-c775-423e-b49f-547a63c200d5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.379102531Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b05a9cd351437e0fa07813290ba4db602fb5fd329e87fef8dac936cb25fbbdc5,PodSandboxId:c2bc92475d1593ddf519c895c522f57d80caf69d9da18a7c1066078044d847f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989441575435230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8842af31785cd52ebbdafa0959eae8cea4aee7dfa504f74935e64f90709ff576,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e731e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989440850021913,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa8d5c2ef0eef267589224de026d832239ec04df5fee97d3944f096c2351184,PodSandboxId:adf90291a5f3d0985ab7c580a2e1beae78e98f9c1a0ab3417500f9d17663d454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989435967848374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0ac0166bc29fa2a4344cda6805cc8ba9089589fa84ee6e2de3bd9a26d8960,PodSandboxId:87b0274ceb976b02f4f6978ba9828b00a55cdb32f959f141e3e82d9270de8d27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989435814747200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f695868c1e5050f1140aca569a60bff1c122e7424221706218b843055d4559d,PodSandboxId:4c3d559ba5fa01bc3aac60801a64bc359f8ae76726193fc6c348fbc642ffd2dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989435782632619,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c13e07694a27604a7283ecab14599ca94c057df8e41b2c9ed9bb8fb8b083292,PodSandboxId:97776589cf2da870fec960d9f5790315b7dd3d29fa672414d44ee224ffc83a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989435747651560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e73
1e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989435687681651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea,PodSandboxId:49c584770c098c55d794e0d0422cae69111ee9cc441aa54562e8a8a6a29542d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989370726875613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d,PodSandboxId:3be846a37f831be91223346e485bf9483fb3085e5cba6ec310c004d56a28d2c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989370042559423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b
71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08997e7db2aca95d5ecb59b8e74931e0088d93a536271a5655c21a88adf47b67,PodSandboxId:43314a6aa6707c482ada8cf19c54fa3f0cfabbf57e068ef121b7a3bdf613b6f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989357561676152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe4e98bd2bc2d934f1cf0670af24bc144ea2818c8501b0956e2a6840cec5315b,PodSandboxId:0ee04d640f242b1aa4baed65c7026f00ff1865df3a005bb54e2f998520f2881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989357544814666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecfbd390ded36fbce68790e962191e98ce02780cad561cb62f869a589724115,PodSandboxId:7631e8b0c6a1306cbd41fc71db442dd5cac3573abbdc7ae08758d607584b5de3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989357502177160,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a611975-c775-423e-b49f-547a63c200d5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.437803417Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb0abbfa-abfa-4069-bff6-9f2cf052dfe3 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.437922532Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb0abbfa-abfa-4069-bff6-9f2cf052dfe3 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.439978054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5738d303-78d0-45fa-95b9-1e0ce84df067 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.440687719Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989456440660565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5738d303-78d0-45fa-95b9-1e0ce84df067 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.441649095Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=064bd237-ef3b-4b73-b962-c5b3f41565ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.441720407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=064bd237-ef3b-4b73-b962-c5b3f41565ac name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:56 pause-855890 crio[2782]: time="2025-11-01 09:30:56.442047271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b05a9cd351437e0fa07813290ba4db602fb5fd329e87fef8dac936cb25fbbdc5,PodSandboxId:c2bc92475d1593ddf519c895c522f57d80caf69d9da18a7c1066078044d847f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989441575435230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8842af31785cd52ebbdafa0959eae8cea4aee7dfa504f74935e64f90709ff576,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e731e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989440850021913,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa8d5c2ef0eef267589224de026d832239ec04df5fee97d3944f096c2351184,PodSandboxId:adf90291a5f3d0985ab7c580a2e1beae78e98f9c1a0ab3417500f9d17663d454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989435967848374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0ac0166bc29fa2a4344cda6805cc8ba9089589fa84ee6e2de3bd9a26d8960,PodSandboxId:87b0274ceb976b02f4f6978ba9828b00a55cdb32f959f141e3e82d9270de8d27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989435814747200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f695868c1e5050f1140aca569a60bff1c122e7424221706218b843055d4559d,PodSandboxId:4c3d559ba5fa01bc3aac60801a64bc359f8ae76726193fc6c348fbc642ffd2dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989435782632619,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c13e07694a27604a7283ecab14599ca94c057df8e41b2c9ed9bb8fb8b083292,PodSandboxId:97776589cf2da870fec960d9f5790315b7dd3d29fa672414d44ee224ffc83a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989435747651560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e73
1e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989435687681651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea,PodSandboxId:49c584770c098c55d794e0d0422cae69111ee9cc441aa54562e8a8a6a29542d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989370726875613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d,PodSandboxId:3be846a37f831be91223346e485bf9483fb3085e5cba6ec310c004d56a28d2c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989370042559423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b
71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08997e7db2aca95d5ecb59b8e74931e0088d93a536271a5655c21a88adf47b67,PodSandboxId:43314a6aa6707c482ada8cf19c54fa3f0cfabbf57e068ef121b7a3bdf613b6f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989357561676152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe4e98bd2bc2d934f1cf0670af24bc144ea2818c8501b0956e2a6840cec5315b,PodSandboxId:0ee04d640f242b1aa4baed65c7026f00ff1865df3a005bb54e2f998520f2881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989357544814666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecfbd390ded36fbce68790e962191e98ce02780cad561cb62f869a589724115,PodSandboxId:7631e8b0c6a1306cbd41fc71db442dd5cac3573abbdc7ae08758d607584b5de3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989357502177160,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=064bd237-ef3b-4b73-b962-c5b3f41565ac name=/runtime.v1.RuntimeService/ListContainers
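For reference, the crio entries above are routine CRI polling: Version, ImageFsInfo, and ListContainers with an empty filter, which is why the full container list is returned each time. A minimal Go sketch that issues the same RPCs against the guest's CRI-O socket follows; the socket path and the use of k8s.io/cri-api (runtime v1) over gRPC are assumptions for illustration, not part of this test run.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// CRI-O serves the CRI on a unix socket inside the minikube guest (assumed path).
    	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)

    	// Same RPC as /runtime.v1.RuntimeService/Version in the log above.
    	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ver.RuntimeName, ver.RuntimeVersion)

    	// Same RPC as ListContainers with an empty filter: returns every container,
    	// running and exited. (ImageFsInfo lives on the separate ImageService client.)
    	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range list.Containers {
    		fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
    	}
    }

The container status table below presents essentially the same data (truncated IDs, image digests, state, and restart attempt) in tabular form.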
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b05a9cd351437       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago       Running             coredns                   1                   c2bc92475d159       coredns-66bc5c9577-czz5l
	8842af31785cd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   15 seconds ago       Running             kube-controller-manager   2                   7a37e79d3e8d4       kube-controller-manager-pause-855890
	1aa8d5c2ef0ee       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   20 seconds ago       Running             kube-proxy                1                   adf90291a5f3d       kube-proxy-9dngv
	71b0ac0166bc2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   20 seconds ago       Running             kube-apiserver            1                   87b0274ceb976       kube-apiserver-pause-855890
	2f695868c1e50       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   20 seconds ago       Running             etcd                      1                   4c3d559ba5fa0       etcd-pause-855890
	2c13e07694a27       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   20 seconds ago       Running             kube-scheduler            1                   97776589cf2da       kube-scheduler-pause-855890
	3e7af9f11cce6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   20 seconds ago       Exited              kube-controller-manager   1                   7a37e79d3e8d4       kube-controller-manager-pause-855890
	317a9379f524c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   49c584770c098       coredns-66bc5c9577-czz5l
	c9ddabcf5ff16       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   3be846a37f831       kube-proxy-9dngv
	08997e7db2aca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   43314a6aa6707       etcd-pause-855890
	fe4e98bd2bc2d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Exited              kube-scheduler            0                   0ee04d640f242       kube-scheduler-pause-855890
	fecfbd390ded3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Exited              kube-apiserver            0                   7631e8b0c6a13       kube-apiserver-pause-855890
	
	
	==> coredns [317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	[INFO] Reloading complete
	[INFO] 127.0.0.1:42025 - 56170 "HINFO IN 151783329721779275.2056587546879306900. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.070216253s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b05a9cd351437e0fa07813290ba4db602fb5fd329e87fef8dac936cb25fbbdc5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59063 - 40051 "HINFO IN 1248045618366012909.6732322449841863943. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0823087s
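Both coredns instances above come up with the standard banner, load the same configuration SHA512 after the reload, and answer the self-check HINFO probe, so in-cluster DNS looks healthy at this point in the run. A rough Go sketch of an equivalent external probe is shown below; the cluster DNS address 10.96.0.10:53 and the use of github.com/miekg/dns are assumptions for illustration only.

    package main

    import (
    	"fmt"

    	"github.com/miekg/dns"
    )

    func main() {
    	// Resolve a name CoreDNS always serves; roughly what a DNS health probe exercises.
    	m := new(dns.Msg)
    	m.SetQuestion(dns.Fqdn("kubernetes.default.svc.cluster.local"), dns.TypeA)

    	c := new(dns.Client)
    	resp, rtt, err := c.Exchange(m, "10.96.0.10:53") // assumed kube-dns service IP
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("rcode:", dns.RcodeToString[resp.Rcode], "rtt:", rtt)
    	for _, rr := range resp.Answer {
    		fmt.Println(rr.String())
    	}
    }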
	
	
	==> describe nodes <==
	Name:               pause-855890
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-855890
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=pause-855890
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_29_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:29:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-855890
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:30:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:30:40 +0000   Sat, 01 Nov 2025 09:29:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:30:40 +0000   Sat, 01 Nov 2025 09:29:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:30:40 +0000   Sat, 01 Nov 2025 09:29:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:30:40 +0000   Sat, 01 Nov 2025 09:29:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.183
	  Hostname:    pause-855890
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8263723d57c466d93e7b8644cf81691
	  System UUID:                c8263723-d57c-466d-93e7-b8644cf81691
	  Boot ID:                    52811c4a-8c08-496b-893c-6f968120d3e5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-czz5l                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     87s
	  kube-system                 etcd-pause-855890                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         92s
	  kube-system                 kube-apiserver-pause-855890             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-pause-855890    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-9dngv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-pause-855890             100m (5%)     0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 86s                kube-proxy       
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 93s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  93s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     92s                kubelet          Node pause-855890 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node pause-855890 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                92s                kubelet          Node pause-855890 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node pause-855890 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           88s                node-controller  Node pause-855890 event: Registered Node pause-855890 in Controller
	  Normal  Starting                 17s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17s (x8 over 17s)  kubelet          Node pause-855890 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node pause-855890 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x7 over 17s)  kubelet          Node pause-855890 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12s                node-controller  Node pause-855890 event: Registered Node pause-855890 in Controller
	
	
	==> dmesg <==
	[Nov 1 09:28] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000090] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001082] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Nov 1 09:29] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085904] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.115930] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.103846] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.172456] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.115208] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.383228] kauditd_printk_skb: 222 callbacks suppressed
	[Nov 1 09:30] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.197664] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.614375] kauditd_printk_skb: 260 callbacks suppressed
	[  +1.816193] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [08997e7db2aca95d5ecb59b8e74931e0088d93a536271a5655c21a88adf47b67] <==
	{"level":"info","ts":"2025-11-01T09:29:33.366381Z","caller":"traceutil/trace.go:172","msg":"trace[1578796124] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"134.669075ms","start":"2025-11-01T09:29:33.231645Z","end":"2025-11-01T09:29:33.366314Z","steps":["trace[1578796124] 'process raft request'  (duration: 134.38702ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:29:33.652699Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.786945ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13371919837915374771 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-czz5l.1873d7ffe81e6750\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-czz5l.1873d7ffe81e6750\" value_size:682 lease:4148547801060598596 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T09:29:33.652855Z","caller":"traceutil/trace.go:172","msg":"trace[359709472] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"190.166701ms","start":"2025-11-01T09:29:33.462671Z","end":"2025-11-01T09:29:33.652838Z","steps":["trace[359709472] 'process raft request'  (duration: 62.623508ms)","trace[359709472] 'compare'  (duration: 126.713366ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:29:33.860083Z","caller":"traceutil/trace.go:172","msg":"trace[1424517841] linearizableReadLoop","detail":"{readStateIndex:397; appliedIndex:397; }","duration":"106.670498ms","start":"2025-11-01T09:29:33.753397Z","end":"2025-11-01T09:29:33.860068Z","steps":["trace[1424517841] 'read index received'  (duration: 106.666179ms)","trace[1424517841] 'applied index is now lower than readState.Index'  (duration: 3.644µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:29:33.861264Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.850764ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-czz5l\" limit:1 ","response":"range_response_count:1 size:5630"}
	{"level":"info","ts":"2025-11-01T09:29:33.861324Z","caller":"traceutil/trace.go:172","msg":"trace[351083579] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-czz5l; range_end:; response_count:1; response_revision:385; }","duration":"107.922898ms","start":"2025-11-01T09:29:33.753392Z","end":"2025-11-01T09:29:33.861315Z","steps":["trace[351083579] 'agreement among raft nodes before linearized reading'  (duration: 106.731606ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:29:33.861674Z","caller":"traceutil/trace.go:172","msg":"trace[523220088] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"203.929384ms","start":"2025-11-01T09:29:33.657613Z","end":"2025-11-01T09:29:33.861543Z","steps":["trace[523220088] 'process raft request'  (duration: 202.507178ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:20.555155Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:30:20.555268Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-855890","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.183:2380"],"advertise-client-urls":["https://192.168.50.183:2379"]}
	{"level":"error","ts":"2025-11-01T09:30:20.555383Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:30:20.641543Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:30:20.641663Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:30:20.641708Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"42038717d2bfb992","current-leader-member-id":"42038717d2bfb992"}
	{"level":"info","ts":"2025-11-01T09:30:20.641791Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T09:30:20.641902Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T09:30:20.641883Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:30:20.641992Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:30:20.642002Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:30:20.642058Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.183:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:30:20.642070Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.183:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:30:20.642087Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.183:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:30:20.649249Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.183:2380"}
	{"level":"error","ts":"2025-11-01T09:30:20.649477Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.183:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:30:20.649537Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.183:2380"}
	{"level":"info","ts":"2025-11-01T09:30:20.649599Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-855890","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.183:2380"],"advertise-client-urls":["https://192.168.50.183:2379"]}
	
	
	==> etcd [2f695868c1e5050f1140aca569a60bff1c122e7424221706218b843055d4559d] <==
	{"level":"warn","ts":"2025-11-01T09:30:39.334735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.356323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.392200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.431855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.461945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.488062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.508322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.526193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.537432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.550412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.556812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.586946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.592413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.605527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.615226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.639920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.660239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.676630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.768451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57040","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:30:48.457225Z","caller":"traceutil/trace.go:172","msg":"trace[2066850321] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"104.365296ms","start":"2025-11-01T09:30:48.352847Z","end":"2025-11-01T09:30:48.457212Z","steps":["trace[2066850321] 'process raft request'  (duration: 104.241587ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:30:48.763422Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.171452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:30:48.763704Z","caller":"traceutil/trace.go:172","msg":"trace[1806969416] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:474; }","duration":"197.539219ms","start":"2025-11-01T09:30:48.566148Z","end":"2025-11-01T09:30:48.763687Z","steps":["trace[1806969416] 'range keys from in-memory index tree'  (duration: 197.058894ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:30:48.763792Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.18035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-855890\" limit:1 ","response":"range_response_count:1 size:6083"}
	{"level":"info","ts":"2025-11-01T09:30:48.763835Z","caller":"traceutil/trace.go:172","msg":"trace[532645769] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-855890; range_end:; response_count:1; response_revision:474; }","duration":"113.232095ms","start":"2025-11-01T09:30:48.650593Z","end":"2025-11-01T09:30:48.763825Z","steps":["trace[532645769] 'range keys from in-memory index tree'  (duration: 113.052623ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:48.937936Z","caller":"traceutil/trace.go:172","msg":"trace[1643631866] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"161.121001ms","start":"2025-11-01T09:30:48.776801Z","end":"2025-11-01T09:30:48.937922Z","steps":["trace[1643631866] 'process raft request'  (duration: 161.002702ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:30:56 up 2 min,  0 users,  load average: 1.86, 0.72, 0.27
	Linux pause-855890 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [71b0ac0166bc29fa2a4344cda6805cc8ba9089589fa84ee6e2de3bd9a26d8960] <==
	I1101 09:30:40.515806       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:30:40.543893       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:30:40.547820       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:30:40.548554       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:30:40.548784       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:30:40.549250       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:30:40.549289       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:30:40.549305       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:30:40.570469       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:30:40.574877       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:30:40.579306       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:30:40.581725       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:30:40.581786       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:30:40.581793       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:30:40.582337       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:30:40.587451       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:30:40.615413       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:30:41.372049       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:30:41.412078       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:30:42.604991       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:30:42.656199       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:30:42.705683       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:30:42.723938       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:30:44.665473       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:30:44.966817       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [fecfbd390ded36fbce68790e962191e98ce02780cad561cb62f869a589724115] <==
	W1101 09:30:20.566729       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.566772       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.566808       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.566846       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.566885       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.576829       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.577085       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.577234       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.577325       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.577787       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.578008       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.578151       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.578434       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.578647       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.578857       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.579000       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.579300       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.577798       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.580675       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.581462       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.581852       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.582100       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.582193       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.582549       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.583555       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752] <==
	
	
	==> kube-controller-manager [8842af31785cd52ebbdafa0959eae8cea4aee7dfa504f74935e64f90709ff576] <==
	I1101 09:30:44.664368       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:30:44.667466       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:30:44.667653       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:30:44.669921       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:30:44.670064       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:30:44.670128       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:30:44.670134       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:30:44.670149       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:30:44.670416       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:30:44.674677       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:30:44.674751       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:30:44.704795       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:30:44.705705       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:30:44.707731       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:30:44.708472       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:30:44.713110       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:30:44.713225       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:30:44.713270       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:30:44.713311       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-855890"
	I1101 09:30:44.713586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:30:44.713828       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:30:44.714257       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:30:44.714510       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:30:44.716586       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:30:44.717183       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [1aa8d5c2ef0eef267589224de026d832239ec04df5fee97d3944f096c2351184] <==
	I1101 09:30:41.762216       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:30:41.862826       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:30:41.862902       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.183"]
	E1101 09:30:41.862990       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:30:41.933023       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:30:41.933095       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:30:41.933125       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:30:41.949815       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:30:41.950227       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:30:41.950256       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:30:41.958665       1 config.go:309] "Starting node config controller"
	I1101 09:30:41.958691       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:30:41.958714       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:30:41.958959       1 config.go:200] "Starting service config controller"
	I1101 09:30:41.959414       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:30:41.959096       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:30:41.959534       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:30:41.959119       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:30:41.959621       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:30:42.059869       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:30:42.059909       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:30:42.059883       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d] <==
	I1101 09:29:30.487472       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:29:30.587650       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:29:30.587683       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.183"]
	E1101 09:29:30.587775       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:29:30.721926       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:29:30.722076       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:29:30.722168       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:29:30.736638       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:29:30.737251       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:29:30.737264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:29:30.745335       1 config.go:200] "Starting service config controller"
	I1101 09:29:30.745381       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:29:30.745421       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:29:30.745426       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:29:30.745436       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:29:30.745439       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:29:30.753033       1 config.go:309] "Starting node config controller"
	I1101 09:29:30.759985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:29:30.760001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:29:30.848811       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:29:30.847325       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:29:30.849821       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2c13e07694a27604a7283ecab14599ca94c057df8e41b2c9ed9bb8fb8b083292] <==
	I1101 09:30:38.567604       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:30:40.606586       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:30:40.606779       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:30:40.616668       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:30:40.616743       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:30:40.616818       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:40.616846       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:40.616872       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:30:40.616890       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:30:40.616901       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:30:40.617006       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:30:40.718398       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:30:40.718538       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:40.719071       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [fe4e98bd2bc2d934f1cf0670af24bc144ea2818c8501b0956e2a6840cec5315b] <==
	E1101 09:29:20.537122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:29:20.537176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:29:21.351433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:29:21.440049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:29:21.450513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:29:21.454045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:29:21.498657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:29:21.516940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:29:21.527211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:29:21.592710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:29:21.616587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:29:21.644876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:29:21.707089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:29:21.730952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:29:21.785953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:29:21.811061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:29:21.873390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:29:21.967019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1101 09:29:24.820187       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:20.570758       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 09:30:20.575594       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:30:20.577692       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:30:20.576207       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:20.580209       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 09:30:20.580250       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.564145    3636 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.565938    3636 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: E1101 09:30:40.571332    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-855890\" already exists" pod="kube-system/etcd-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.571410    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: E1101 09:30:40.588901    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-855890\" already exists" pod="kube-system/kube-apiserver-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.588977    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: E1101 09:30:40.625564    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-855890\" already exists" pod="kube-system/kube-controller-manager-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.625620    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: E1101 09:30:40.649287    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-855890\" already exists" pod="kube-system/kube-scheduler-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.829968    3636 scope.go:117] "RemoveContainer" containerID="3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.219380    3636 apiserver.go:52] "Watching apiserver"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.268876    3636 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.359197    3636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74e9d2ed-3e06-4b92-b71e-0d3520d7d64b-xtables-lock\") pod \"kube-proxy-9dngv\" (UID: \"74e9d2ed-3e06-4b92-b71e-0d3520d7d64b\") " pod="kube-system/kube-proxy-9dngv"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.360455    3636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74e9d2ed-3e06-4b92-b71e-0d3520d7d64b-lib-modules\") pod \"kube-proxy-9dngv\" (UID: \"74e9d2ed-3e06-4b92-b71e-0d3520d7d64b\") " pod="kube-system/kube-proxy-9dngv"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.510684    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.510693    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.510782    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.511127    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.526111    3636 scope.go:117] "RemoveContainer" containerID="c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: E1101 09:30:41.632042    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-855890\" already exists" pod="kube-system/kube-apiserver-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: E1101 09:30:41.644935    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-855890\" already exists" pod="kube-system/etcd-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: E1101 09:30:41.645651    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-855890\" already exists" pod="kube-system/kube-controller-manager-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: E1101 09:30:41.666933    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-855890\" already exists" pod="kube-system/kube-scheduler-pause-855890"
	Nov 01 09:30:49 pause-855890 kubelet[3636]: E1101 09:30:49.521283    3636 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989449520570227  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 09:30:49 pause-855890 kubelet[3636]: E1101 09:30:49.521891    3636 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989449520570227  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-855890 -n pause-855890
helpers_test.go:269: (dbg) Run:  kubectl --context pause-855890 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
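For local triage outside CI, the same post-mortem commands recorded above can be replayed by hand. The sketch below is a minimal, hypothetical Go helper (it is not part of helpers_test.go) that simply shells out to the exact minikube and kubectl invocations shown in this report; it assumes both binaries are on PATH and that the pause-855890 profile still exists on the machine.

	// postmortem.go: replay the report's post-mortem commands (sketch only).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := [][]string{
			// host status, using the same Go template as the harness
			{"minikube", "status", "--format={{.Host}}", "-p", "pause-855890", "-n", "pause-855890"},
			// API server status
			{"minikube", "status", "--format={{.APIServer}}", "-p", "pause-855890", "-n", "pause-855890"},
			// list any pods not in the Running phase
			{"kubectl", "--context", "pause-855890", "get", "po",
				"-o=jsonpath={.items[*].metadata.name}", "-A", "--field-selector=status.phase!=Running"},
			// last 25 log lines, as dumped in the report above
			{"minikube", "-p", "pause-855890", "logs", "-n", "25"},
		}
		for _, c := range cmds {
			out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
			fmt.Printf("$ %v\nerr=%v\n%s\n", c, err, out)
		}
	}

This only mirrors the collection steps for inspection; it does not re-run the failed test itself.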
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-855890 -n pause-855890
helpers_test.go:252: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-855890 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-855890 logs -n 25: (1.345039537s)
helpers_test.go:260: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                          ARGS                                                                                                           │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-997526 sudo docker system info                                                                                                                                                                                │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                               │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl cat cri-docker --no-pager                                                                                                                                                               │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                          │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                    │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo cri-dockerd --version                                                                                                                                                                             │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p NoKubernetes-709275 sudo systemctl is-active --quiet service kubelet                                                                                                                                                 │ NoKubernetes-709275       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl status containerd --all --full --no-pager                                                                                                                                               │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl cat containerd --no-pager                                                                                                                                                               │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ delete  │ -p NoKubernetes-709275                                                                                                                                                                                                  │ NoKubernetes-709275       │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ ssh     │ -p cilium-997526 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                        │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo cat /etc/containerd/config.toml                                                                                                                                                                   │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo containerd config dump                                                                                                                                                                            │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl status crio --all --full --no-pager                                                                                                                                                     │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo systemctl cat crio --no-pager                                                                                                                                                                     │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                           │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ ssh     │ -p cilium-997526 sudo crio config                                                                                                                                                                                       │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │                     │
	│ delete  │ -p cilium-997526                                                                                                                                                                                                        │ cilium-997526             │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p guest-649821 --no-kubernetes --driver=kvm2  --container-runtime=crio                                                                                                                                                 │ guest-649821              │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:29 UTC │
	│ start   │ -p cert-expiration-602924 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio                                                                                                                    │ cert-expiration-602924    │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p force-systemd-flag-806647 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio                                                                                               │ force-systemd-flag-806647 │ jenkins │ v1.37.0 │ 01 Nov 25 09:29 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p pause-855890 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio                                                                                                                                          │ pause-855890              │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ ssh     │ force-systemd-flag-806647 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                    │ force-systemd-flag-806647 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ delete  │ -p force-systemd-flag-806647                                                                                                                                                                                            │ force-systemd-flag-806647 │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │ 01 Nov 25 09:30 UTC │
	│ start   │ -p cert-options-414547 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio │ cert-options-414547       │ jenkins │ v1.37.0 │ 01 Nov 25 09:30 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:30:42
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:30:42.156125   40614 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:30:42.156310   40614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:30:42.156315   40614 out.go:374] Setting ErrFile to fd 2...
	I1101 09:30:42.156320   40614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:30:42.156674   40614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 09:30:42.157507   40614 out.go:368] Setting JSON to false
	I1101 09:30:42.158842   40614 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4389,"bootTime":1761985053,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:30:42.158944   40614 start.go:143] virtualization: kvm guest
	I1101 09:30:42.162111   40614 out.go:179] * [cert-options-414547] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:30:42.163424   40614 notify.go:221] Checking for updates...
	I1101 09:30:42.163462   40614 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:30:42.165034   40614 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:30:42.166373   40614 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:30:42.167873   40614 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:30:42.169360   40614 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:30:42.170870   40614 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:30:42.173039   40614 config.go:182] Loaded profile config "cert-expiration-602924": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:42.173198   40614 config.go:182] Loaded profile config "guest-649821": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 09:30:42.173363   40614 config.go:182] Loaded profile config "kubernetes-upgrade-133315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:42.173582   40614 config.go:182] Loaded profile config "pause-855890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:42.173766   40614 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:30:42.216396   40614 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 09:30:42.217589   40614 start.go:309] selected driver: kvm2
	I1101 09:30:42.217595   40614 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:30:42.217612   40614 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:30:42.218409   40614 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:30:42.218635   40614 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:30:42.218648   40614 cni.go:84] Creating CNI manager for ""
	I1101 09:30:42.218686   40614 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:30:42.218690   40614 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 09:30:42.218719   40614 start.go:353] cluster config:
	{Name:cert-options-414547 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-options-414547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:30:42.218887   40614 iso.go:125] acquiring lock: {Name:mk345092679db7c379cbaa00125c4f18e2b4a125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:30:42.221326   40614 out.go:179] * Starting "cert-options-414547" primary control-plane node in "cert-options-414547" cluster
	I1101 09:30:42.222521   40614 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:30:42.222565   40614 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:30:42.222585   40614 cache.go:59] Caching tarball of preloaded images
	I1101 09:30:42.222685   40614 preload.go:233] Found /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:30:42.222699   40614 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:30:42.222826   40614 profile.go:143] Saving config to /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/cert-options-414547/config.json ...
	I1101 09:30:42.222848   40614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/cert-options-414547/config.json: {Name:mk3b7e467ef9ca601bd25e5919e28e9954756bf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:42.223039   40614 start.go:360] acquireMachinesLock for cert-options-414547: {Name:mk8049b4e421873947dfa0bcd96201ccb1e1825c Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1101 09:30:42.223088   40614 start.go:364] duration metric: took 29.48µs to acquireMachinesLock for "cert-options-414547"
	I1101 09:30:42.223110   40614 start.go:93] Provisioning new machine with config: &{Name:cert-options-414547 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8555 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-options-414547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[localhost www.google.com] APIServerIPs:[127.0.0.1 192.168.15.15] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:30:42.223183   40614 start.go:125] createHost starting for "" (driver="kvm2")
	I1101 09:30:40.566999   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:30:40.567047   40325 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:30:40.864564   40325 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I1101 09:30:40.876377   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:30:40.876404   40325 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:30:41.364042   40325 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I1101 09:30:41.381676   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:30:41.381710   40325 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:30:41.864371   40325 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I1101 09:30:41.884606   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:30:41.884639   40325 api_server.go:103] status: https://192.168.50.183:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:30:42.364372   40325 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I1101 09:30:42.370923   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I1101 09:30:42.380257   40325 api_server.go:141] control plane version: v1.34.1
	I1101 09:30:42.380289   40325 api_server.go:131] duration metric: took 3.016804839s to wait for apiserver health ...
	I1101 09:30:42.380302   40325 cni.go:84] Creating CNI manager for ""
	I1101 09:30:42.380310   40325 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 09:30:42.382748   40325 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1101 09:30:42.384302   40325 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1101 09:30:42.403958   40325 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1101 09:30:42.428428   40325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:30:42.442346   40325 system_pods.go:59] 6 kube-system pods found
	I1101 09:30:42.442385   40325 system_pods.go:61] "coredns-66bc5c9577-czz5l" [0464c9e4-46a6-477e-94b6-fed9a6eb2966] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:30:42.442394   40325 system_pods.go:61] "etcd-pause-855890" [229a3488-3c8b-45a2-8535-95dd1be1dcc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:30:42.442407   40325 system_pods.go:61] "kube-apiserver-pause-855890" [f6e98a8b-9606-43e9-a2ee-cae88a417568] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:30:42.442418   40325 system_pods.go:61] "kube-controller-manager-pause-855890" [41c64391-6367-45d0-af32-82b61b2e385f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:30:42.442432   40325 system_pods.go:61] "kube-proxy-9dngv" [74e9d2ed-3e06-4b92-b71e-0d3520d7d64b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:30:42.442449   40325 system_pods.go:61] "kube-scheduler-pause-855890" [471d7f99-bc78-44a6-9ab3-0004d3b1fd4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:30:42.442460   40325 system_pods.go:74] duration metric: took 14.007924ms to wait for pod list to return data ...
	I1101 09:30:42.442470   40325 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:30:42.446649   40325 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 09:30:42.446674   40325 node_conditions.go:123] node cpu capacity is 2
	I1101 09:30:42.446684   40325 node_conditions.go:105] duration metric: took 4.210831ms to run NodePressure ...
	I1101 09:30:42.446737   40325 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1101 09:30:42.740857   40325 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1101 09:30:42.746250   40325 kubeadm.go:744] kubelet initialised
	I1101 09:30:42.746276   40325 kubeadm.go:745] duration metric: took 5.391237ms waiting for restarted kubelet to initialise ...
	I1101 09:30:42.746294   40325 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:30:42.768144   40325 ops.go:34] apiserver oom_adj: -16
	I1101 09:30:42.768170   40325 kubeadm.go:602] duration metric: took 6.919299576s to restartPrimaryControlPlane
	I1101 09:30:42.768183   40325 kubeadm.go:403] duration metric: took 7.178477096s to StartCluster
	I1101 09:30:42.768203   40325 settings.go:142] acquiring lock: {Name:mk818d33e162ca33774e3ab05f6aac30f8feaf64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:42.768316   40325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:30:42.769771   40325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21835-5912/kubeconfig: {Name:mk599bec02e6b7062c3926243176124a4bc71dfd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:30:42.770082   40325 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.50.183 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:30:42.770416   40325 config.go:182] Loaded profile config "pause-855890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:42.770471   40325 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:30:42.771785   40325 out.go:179] * Enabled addons: 
	I1101 09:30:42.771802   40325 out.go:179] * Verifying Kubernetes components...
	I1101 09:30:42.731367   36048 api_server.go:269] stopped: https://192.168.39.77:8443/healthz: Get "https://192.168.39.77:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1101 09:30:42.731431   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:30:42.731507   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:30:42.790399   36048 cri.go:89] found id: "853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4"
	I1101 09:30:42.790423   36048 cri.go:89] found id: "223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9"
	I1101 09:30:42.790429   36048 cri.go:89] found id: ""
	I1101 09:30:42.790442   36048 logs.go:282] 2 containers: [853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9]
	I1101 09:30:42.790514   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.796902   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.802663   36048 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:30:42.802742   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:30:42.855650   36048 cri.go:89] found id: "f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92"
	I1101 09:30:42.855679   36048 cri.go:89] found id: ""
	I1101 09:30:42.855689   36048 logs.go:282] 1 containers: [f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92]
	I1101 09:30:42.855754   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.862061   36048 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:30:42.862153   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:30:42.907354   36048 cri.go:89] found id: "829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be"
	I1101 09:30:42.907378   36048 cri.go:89] found id: ""
	I1101 09:30:42.907388   36048 logs.go:282] 1 containers: [829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be]
	I1101 09:30:42.907448   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.912553   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:30:42.912627   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:30:42.963362   36048 cri.go:89] found id: "a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634"
	I1101 09:30:42.963388   36048 cri.go:89] found id: "4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558"
	I1101 09:30:42.963393   36048 cri.go:89] found id: ""
	I1101 09:30:42.963402   36048 logs.go:282] 2 containers: [a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634 4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558]
	I1101 09:30:42.963463   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.969033   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:42.973483   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:30:42.973552   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:30:43.024410   36048 cri.go:89] found id: "c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6"
	I1101 09:30:43.024436   36048 cri.go:89] found id: ""
	I1101 09:30:43.024446   36048 logs.go:282] 1 containers: [c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6]
	I1101 09:30:43.024509   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:43.029093   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:30:43.029172   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:30:43.073486   36048 cri.go:89] found id: "936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea"
	I1101 09:30:43.073511   36048 cri.go:89] found id: ""
	I1101 09:30:43.073521   36048 logs.go:282] 1 containers: [936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea]
	I1101 09:30:43.073585   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:43.079231   36048 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:30:43.079305   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:30:43.133027   36048 cri.go:89] found id: ""
	I1101 09:30:43.133057   36048 logs.go:282] 0 containers: []
	W1101 09:30:43.133068   36048 logs.go:284] No container was found matching "kindnet"
	I1101 09:30:43.133077   36048 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:30:43.133153   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:30:43.177195   36048 cri.go:89] found id: "705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d"
	I1101 09:30:43.177243   36048 cri.go:89] found id: ""
	I1101 09:30:43.177253   36048 logs.go:282] 1 containers: [705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d]
	I1101 09:30:43.177340   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:43.182098   36048 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:30:43.182128   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1101 09:30:42.773253   40325 addons.go:515] duration metric: took 2.783608ms for enable addons: enabled=[]
	I1101 09:30:42.773294   40325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:30:43.018240   40325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:30:43.047023   40325 node_ready.go:35] waiting up to 6m0s for node "pause-855890" to be "Ready" ...
	I1101 09:30:43.051015   40325 node_ready.go:49] node "pause-855890" is "Ready"
	I1101 09:30:43.051065   40325 node_ready.go:38] duration metric: took 3.965977ms for node "pause-855890" to be "Ready" ...
	I1101 09:30:43.051088   40325 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:30:43.051148   40325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:30:43.078816   40325 api_server.go:72] duration metric: took 308.69377ms to wait for apiserver process to appear ...
	I1101 09:30:43.078846   40325 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:30:43.078870   40325 api_server.go:253] Checking apiserver healthz at https://192.168.50.183:8443/healthz ...
	I1101 09:30:43.086353   40325 api_server.go:279] https://192.168.50.183:8443/healthz returned 200:
	ok
	I1101 09:30:43.087710   40325 api_server.go:141] control plane version: v1.34.1
	I1101 09:30:43.087733   40325 api_server.go:131] duration metric: took 8.879713ms to wait for apiserver health ...
	I1101 09:30:43.087744   40325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:30:43.091519   40325 system_pods.go:59] 6 kube-system pods found
	I1101 09:30:43.091548   40325 system_pods.go:61] "coredns-66bc5c9577-czz5l" [0464c9e4-46a6-477e-94b6-fed9a6eb2966] Running
	I1101 09:30:43.091560   40325 system_pods.go:61] "etcd-pause-855890" [229a3488-3c8b-45a2-8535-95dd1be1dcc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:30:43.091569   40325 system_pods.go:61] "kube-apiserver-pause-855890" [f6e98a8b-9606-43e9-a2ee-cae88a417568] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:30:43.091580   40325 system_pods.go:61] "kube-controller-manager-pause-855890" [41c64391-6367-45d0-af32-82b61b2e385f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:30:43.091588   40325 system_pods.go:61] "kube-proxy-9dngv" [74e9d2ed-3e06-4b92-b71e-0d3520d7d64b] Running
	I1101 09:30:43.091607   40325 system_pods.go:61] "kube-scheduler-pause-855890" [471d7f99-bc78-44a6-9ab3-0004d3b1fd4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:30:43.091615   40325 system_pods.go:74] duration metric: took 3.864338ms to wait for pod list to return data ...
	I1101 09:30:43.091623   40325 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:30:43.096713   40325 default_sa.go:45] found service account: "default"
	I1101 09:30:43.096741   40325 default_sa.go:55] duration metric: took 5.110825ms for default service account to be created ...
	I1101 09:30:43.096754   40325 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:30:43.101053   40325 system_pods.go:86] 6 kube-system pods found
	I1101 09:30:43.101091   40325 system_pods.go:89] "coredns-66bc5c9577-czz5l" [0464c9e4-46a6-477e-94b6-fed9a6eb2966] Running
	I1101 09:30:43.101104   40325 system_pods.go:89] "etcd-pause-855890" [229a3488-3c8b-45a2-8535-95dd1be1dcc0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:30:43.101114   40325 system_pods.go:89] "kube-apiserver-pause-855890" [f6e98a8b-9606-43e9-a2ee-cae88a417568] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:30:43.101127   40325 system_pods.go:89] "kube-controller-manager-pause-855890" [41c64391-6367-45d0-af32-82b61b2e385f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:30:43.101134   40325 system_pods.go:89] "kube-proxy-9dngv" [74e9d2ed-3e06-4b92-b71e-0d3520d7d64b] Running
	I1101 09:30:43.101143   40325 system_pods.go:89] "kube-scheduler-pause-855890" [471d7f99-bc78-44a6-9ab3-0004d3b1fd4d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:30:43.101152   40325 system_pods.go:126] duration metric: took 4.390473ms to wait for k8s-apps to be running ...
	I1101 09:30:43.101163   40325 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:30:43.101236   40325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:30:43.126362   40325 system_svc.go:56] duration metric: took 25.189413ms WaitForService to wait for kubelet
	I1101 09:30:43.126392   40325 kubeadm.go:587] duration metric: took 356.274473ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:30:43.126412   40325 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:30:43.134163   40325 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1101 09:30:43.134195   40325 node_conditions.go:123] node cpu capacity is 2
	I1101 09:30:43.134233   40325 node_conditions.go:105] duration metric: took 7.814319ms to run NodePressure ...
	I1101 09:30:43.134250   40325 start.go:242] waiting for startup goroutines ...
	I1101 09:30:43.134264   40325 start.go:247] waiting for cluster config update ...
	I1101 09:30:43.134276   40325 start.go:256] writing updated cluster config ...
	I1101 09:30:43.134686   40325 ssh_runner.go:195] Run: rm -f paused
	I1101 09:30:43.141856   40325 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:30:43.142801   40325 kapi.go:59] client config for pause-855890: &rest.Config{Host:"https://192.168.50.183:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/client.crt", KeyFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/profiles/pause-855890/client.key", CAFile:"/home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1101 09:30:43.146172   40325 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-czz5l" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:43.151050   40325 pod_ready.go:94] pod "coredns-66bc5c9577-czz5l" is "Ready"
	I1101 09:30:43.151078   40325 pod_ready.go:86] duration metric: took 4.878579ms for pod "coredns-66bc5c9577-czz5l" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:43.153331   40325 pod_ready.go:83] waiting for pod "etcd-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:30:45.159744   40325 pod_ready.go:104] pod "etcd-pause-855890" is not "Ready", error: <nil>
	I1101 09:30:42.225454   40614 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1101 09:30:42.225584   40614 start.go:159] libmachine.API.Create for "cert-options-414547" (driver="kvm2")
	I1101 09:30:42.225623   40614 client.go:173] LocalClient.Create starting
	I1101 09:30:42.225680   40614 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5912/.minikube/certs/ca.pem
	I1101 09:30:42.225707   40614 main.go:143] libmachine: Decoding PEM data...
	I1101 09:30:42.225720   40614 main.go:143] libmachine: Parsing certificate...
	I1101 09:30:42.225775   40614 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21835-5912/.minikube/certs/cert.pem
	I1101 09:30:42.225790   40614 main.go:143] libmachine: Decoding PEM data...
	I1101 09:30:42.225805   40614 main.go:143] libmachine: Parsing certificate...
	I1101 09:30:42.226149   40614 main.go:143] libmachine: creating domain...
	I1101 09:30:42.226154   40614 main.go:143] libmachine: creating network...
	I1101 09:30:42.227658   40614 main.go:143] libmachine: found existing default network
	I1101 09:30:42.227851   40614 main.go:143] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:30:42.228644   40614 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:98:e9:70} reservation:<nil>}
	I1101 09:30:42.229512   40614 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:fe:b9:8d} reservation:<nil>}
	I1101 09:30:42.230460   40614 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:de:c4:3e} reservation:<nil>}
	I1101 09:30:42.231629   40614 network.go:211] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:72:53:ad} reservation:<nil>}
	I1101 09:30:42.232956   40614 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e3a840}
	I1101 09:30:42.233044   40614 main.go:143] libmachine: defining private network:
	
	<network>
	  <name>mk-cert-options-414547</name>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1101 09:30:42.239237   40614 main.go:143] libmachine: creating private network mk-cert-options-414547 192.168.83.0/24...
	I1101 09:30:42.321104   40614 main.go:143] libmachine: private network mk-cert-options-414547 192.168.83.0/24 created
	I1101 09:30:42.321380   40614 main.go:143] libmachine: <network>
	  <name>mk-cert-options-414547</name>
	  <uuid>cd7f132d-8dcf-4703-ac4f-189f6b585de9</uuid>
	  <bridge name='virbr5' stp='on' delay='0'/>
	  <mac address='52:54:00:49:68:5d'/>
	  <dns enable='no'/>
	  <ip address='192.168.83.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.83.2' end='192.168.83.253'/>
	    </dhcp>
	  </ip>
	</network>
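
The "defining private network" / "private network ... created" pair above corresponds to defining the XML and then starting the network. A minimal sketch of those two calls, assuming the libvirt.org/go/libvirt bindings that the KVM driver builds on (error handling trimmed):

// netdefine.go: define and start the per-cluster libvirt network from XML.
package main

import (
	"fmt"
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-cert-options-414547</name>
  <dns enable='no'/>
  <ip address='192.168.83.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.83.2' end='192.168.83.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connecting to libvirt: %v", err)
	}
	defer conn.Close()

	// Define the persistent network object, then start it so the virbrN
	// bridge and its dnsmasq instance come up (as reflected in the XML dump
	// above, which gains a <bridge> and <mac> element once created).
	netw, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("defining network: %v", err)
	}
	defer netw.Free()

	if err := netw.Create(); err != nil {
		log.Fatalf("starting network: %v", err)
	}
	fmt.Println("private network mk-cert-options-414547 created")
}
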
	
	I1101 09:30:42.321416   40614 main.go:143] libmachine: setting up store path in /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547 ...
	I1101 09:30:42.321441   40614 main.go:143] libmachine: building disk image from file:///home/jenkins/minikube-integration/21835-5912/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 09:30:42.321447   40614 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:30:42.321556   40614 main.go:143] libmachine: Downloading /home/jenkins/minikube-integration/21835-5912/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/21835-5912/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso...
	I1101 09:30:42.556808   40614 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/id_rsa...
	I1101 09:30:42.894999   40614 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/cert-options-414547.rawdisk...
	I1101 09:30:42.895041   40614 main.go:143] libmachine: Writing magic tar header
	I1101 09:30:42.895101   40614 main.go:143] libmachine: Writing SSH key tar header
	I1101 09:30:42.895202   40614 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547 ...
	I1101 09:30:42.895305   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547
	I1101 09:30:42.895341   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547 (perms=drwx------)
	I1101 09:30:42.895359   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube/machines
	I1101 09:30:42.895373   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube/machines (perms=drwxr-xr-x)
	I1101 09:30:42.895387   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:30:42.895400   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912/.minikube (perms=drwxr-xr-x)
	I1101 09:30:42.895411   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/21835-5912
	I1101 09:30:42.895423   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration/21835-5912 (perms=drwxrwxr-x)
	I1101 09:30:42.895436   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1101 09:30:42.895447   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1101 09:30:42.895456   40614 main.go:143] libmachine: checking permissions on dir: /home/jenkins
	I1101 09:30:42.895465   40614 main.go:143] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1101 09:30:42.895476   40614 main.go:143] libmachine: checking permissions on dir: /home
	I1101 09:30:42.895484   40614 main.go:143] libmachine: skipping /home - not owner
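
The permission-fixing walk above climbs from the machine directory toward /, ensuring each parent is traversable and stopping at directories the user does not own. A minimal sketch of that walk, under the assumption that "fixing" just means adding the owner-execute (search) bit:

// fixperms.go: walk parent directories and ensure they are traversable.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func fixPermissions(dir string) error {
	for {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		// Add the owner-execute bit so the path can be traversed; a fuller
		// implementation would also skip directories owned by another user,
		// as the log does for /home.
		mode := info.Mode().Perm()
		if mode&0o100 == 0 {
			if err := os.Chmod(dir, mode|0o100); err != nil {
				fmt.Printf("skipping %s: %v\n", dir, err)
			} else {
				fmt.Printf("set executable bit on %s\n", dir)
			}
		}
		parent := filepath.Dir(dir)
		if parent == dir { // reached the filesystem root
			return nil
		}
		dir = parent
	}
}

func main() {
	machineDir := "/home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547"
	if err := fixPermissions(machineDir); err != nil {
		fmt.Println("error:", err)
	}
}
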
	I1101 09:30:42.895489   40614 main.go:143] libmachine: defining domain...
	I1101 09:30:42.896906   40614 main.go:143] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>cert-options-414547</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/cert-options-414547.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-cert-options-414547'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
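
Defining the domain from the XML above and then booting it maps onto two libvirt calls. A minimal sketch, again assuming the libvirt.org/go/libvirt bindings; the XML constant here is a heavily trimmed stand-in for the full definition printed above:

// domdefine.go: define the KVM guest from XML and start it.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const domainXML = `<domain type='kvm'>
  <name>cert-options-414547</name>
  <memory unit='MiB'>3072</memory>
  <vcpu>2</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <interface type='network'><source network='mk-cert-options-414547'/><model type='virtio'/></interface>
    <interface type='network'><source network='default'/><model type='virtio'/></interface>
  </devices>
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connecting to libvirt: %v", err)
	}
	defer conn.Close()

	// DomainDefineXML makes the domain persistent; Create boots it, which is
	// where "waiting for domain to start" begins in the log below.
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		log.Fatalf("defining domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("starting domain: %v", err)
	}
	log.Println("domain cert-options-414547 is now running")
}
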
	
	I1101 09:30:42.905632   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:df:49:94 in network default
	I1101 09:30:42.906411   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:42.906424   40614 main.go:143] libmachine: starting domain...
	I1101 09:30:42.906427   40614 main.go:143] libmachine: ensuring networks are active...
	I1101 09:30:42.907546   40614 main.go:143] libmachine: Ensuring network default is active
	I1101 09:30:42.908169   40614 main.go:143] libmachine: Ensuring network mk-cert-options-414547 is active
	I1101 09:30:42.909245   40614 main.go:143] libmachine: getting domain XML...
	I1101 09:30:42.910637   40614 main.go:143] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>cert-options-414547</name>
	  <uuid>d1ca6388-16c4-4754-acbe-6531e17fb0b8</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/21835-5912/.minikube/machines/cert-options-414547/cert-options-414547.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:1f:e9:df'/>
	      <source network='mk-cert-options-414547'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:df:49:94'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1101 09:30:44.330339   40614 main.go:143] libmachine: waiting for domain to start...
	I1101 09:30:44.331743   40614 main.go:143] libmachine: domain is now running
	I1101 09:30:44.331751   40614 main.go:143] libmachine: waiting for IP...
	I1101 09:30:44.332517   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:44.333016   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:44.333020   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:44.333337   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:44.333366   40614 retry.go:31] will retry after 188.233686ms: waiting for domain to come up
	I1101 09:30:44.523749   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:44.524515   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:44.524523   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:44.524940   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:44.524973   40614 retry.go:31] will retry after 388.892821ms: waiting for domain to come up
	I1101 09:30:44.915566   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:44.916321   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:44.916332   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:44.916667   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:44.916697   40614 retry.go:31] will retry after 379.270751ms: waiting for domain to come up
	I1101 09:30:45.297026   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:45.297779   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:45.297790   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:45.298189   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:45.298233   40614 retry.go:31] will retry after 467.831668ms: waiting for domain to come up
	I1101 09:30:45.767980   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:45.768588   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:45.768597   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:45.768985   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:45.769029   40614 retry.go:31] will retry after 588.768021ms: waiting for domain to come up
	I1101 09:30:46.359958   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:46.360695   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:46.360704   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:46.361095   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:46.361126   40614 retry.go:31] will retry after 832.145632ms: waiting for domain to come up
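
The "waiting for domain to come up" loop above re-checks for a DHCP lease with a growing, jittered delay (188ms, 388ms, 379ms, ...). A minimal, stdlib-only stand-in for that retry.go pattern; the check function here is a dummy rather than a real lease lookup:

// retrysketch.go: retry a check with growing, jittered backoff until it
// succeeds or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(deadline time.Duration, check func() error) error {
	start := time.Now()
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %d attempts: %w", attempt, err)
		}
		// Grow the delay and add jitter, mirroring the irregular intervals
		// printed by retry.go above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
}

func main() {
	attempts := 0
	err := retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("waiting for domain to come up")
		}
		return nil // pretend the DHCP lease finally appeared
	})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("domain is up")
}
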
	I1101 09:30:48.761156   36048 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (5.57900129s)
	W1101 09:30:48.761254   36048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:36246->127.0.0.1:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp 127.0.0.1:8443: connect: connection refused - error from a previous attempt: read tcp 127.0.0.1:36246->127.0.0.1:8443: read: connection reset by peer
	
	** /stderr **
	I1101 09:30:48.761271   36048 logs.go:123] Gathering logs for etcd [f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92] ...
	I1101 09:30:48.761287   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92"
	I1101 09:30:48.823695   36048 logs.go:123] Gathering logs for coredns [829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be] ...
	I1101 09:30:48.823746   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be"
	I1101 09:30:48.865957   36048 logs.go:123] Gathering logs for kube-scheduler [a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634] ...
	I1101 09:30:48.865993   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634"
	I1101 09:30:48.927443   36048 logs.go:123] Gathering logs for kube-scheduler [4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558] ...
	I1101 09:30:48.927484   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558"
	I1101 09:30:48.975530   36048 logs.go:123] Gathering logs for kube-proxy [c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6] ...
	I1101 09:30:48.975560   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6"
	I1101 09:30:49.022574   36048 logs.go:123] Gathering logs for kube-apiserver [853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4] ...
	I1101 09:30:49.022605   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4"
	I1101 09:30:49.070667   36048 logs.go:123] Gathering logs for kube-apiserver [223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9] ...
	I1101 09:30:49.070702   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9"
	W1101 09:30:49.113734   36048 logs.go:130] failed kube-apiserver [223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9": Process exited with status 1
	stdout:
	
	stderr:
	E1101 09:30:49.103774    4290 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9\": container with ID starting with 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9 not found: ID does not exist" containerID="223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9"
	time="2025-11-01T09:30:49Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9\": container with ID starting with 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1101 09:30:49.103774    4290 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9\": container with ID starting with 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9 not found: ID does not exist" containerID="223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9"
	time="2025-11-01T09:30:49Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9\": container with ID starting with 223e8bd47a8009d1421af6bceab47d9d178a2c4d9905527eccdc3ff8e8c25ed9 not found: ID does not exist"
	
	** /stderr **
	I1101 09:30:49.113755   36048 logs.go:123] Gathering logs for kube-controller-manager [936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea] ...
	I1101 09:30:49.113768   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea"
	I1101 09:30:49.173991   36048 logs.go:123] Gathering logs for storage-provisioner [705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d] ...
	I1101 09:30:49.174034   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d"
	I1101 09:30:49.216102   36048 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:30:49.216134   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
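
Each "Gathering logs for ..." step above shells out to crictl with a per-container ID via ssh_runner. A minimal sketch of that step, run locally rather than over SSH for brevity (the container IDs are copied from the log and serve only as placeholders):

// gatherlogs.go: tail container logs via crictl, one container at a time.
package main

import (
	"fmt"
	"os/exec"
)

func containerLogs(id string, tail int) (string, error) {
	cmd := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo /usr/bin/crictl logs --tail %d %s", tail, id))
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	ids := map[string]string{
		"etcd":    "f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92",
		"coredns": "829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be",
	}
	for name, id := range ids {
		logs, err := containerLogs(id, 400)
		if err != nil {
			// A container that has already been garbage-collected yields a
			// NotFound error, as seen for the exited kube-apiserver above.
			fmt.Printf("failed %s [%s]: %v\n", name, id, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s\n", name, logs)
	}
}
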
	W1101 09:30:47.160551   40325 pod_ready.go:104] pod "etcd-pause-855890" is not "Ready", error: <nil>
	W1101 09:30:49.161830   40325 pod_ready.go:104] pod "etcd-pause-855890" is not "Ready", error: <nil>
	I1101 09:30:47.194506   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:47.195237   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:47.195246   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:47.195666   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:47.195694   40614 retry.go:31] will retry after 910.702815ms: waiting for domain to come up
	I1101 09:30:48.107847   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:48.108510   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:48.108517   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:48.108783   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:48.108807   40614 retry.go:31] will retry after 1.132074356s: waiting for domain to come up
	I1101 09:30:49.242573   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:49.243368   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:49.243379   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:49.243746   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:49.243795   40614 retry.go:31] will retry after 1.592675697s: waiting for domain to come up
	I1101 09:30:50.837893   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:50.838604   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:50.838613   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:50.838934   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:50.838959   40614 retry.go:31] will retry after 1.669106824s: waiting for domain to come up
	I1101 09:30:49.615621   36048 logs.go:123] Gathering logs for container status ...
	I1101 09:30:49.615657   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:30:49.669953   36048 logs.go:123] Gathering logs for kubelet ...
	I1101 09:30:49.669981   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:30:49.769907   36048 logs.go:123] Gathering logs for dmesg ...
	I1101 09:30:49.769956   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:30:52.289438   36048 api_server.go:253] Checking apiserver healthz at https://192.168.39.77:8443/healthz ...
	I1101 09:30:52.290253   36048 api_server.go:269] stopped: https://192.168.39.77:8443/healthz: Get "https://192.168.39.77:8443/healthz": dial tcp 192.168.39.77:8443: connect: connection refused
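
The healthz probe above fails with "connection refused" while the apiserver is restarting. A minimal sketch of such a probe; certificate verification is skipped here purely for the health check, as a stand-in for minikube's cert-aware client:

// healthz.go: probe the apiserver /healthz endpoint over HTTPS.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" while the apiserver restarts
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.39.77:8443/healthz"); err != nil {
		fmt.Println("stopped:", err)
		return
	}
	fmt.Println("apiserver is healthy")
}
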
	I1101 09:30:52.290320   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1101 09:30:52.290385   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1101 09:30:52.341979   36048 cri.go:89] found id: "853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4"
	I1101 09:30:52.342012   36048 cri.go:89] found id: ""
	I1101 09:30:52.342021   36048 logs.go:282] 1 containers: [853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4]
	I1101 09:30:52.342100   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.348841   36048 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1101 09:30:52.348934   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1101 09:30:52.391958   36048 cri.go:89] found id: "f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92"
	I1101 09:30:52.391988   36048 cri.go:89] found id: ""
	I1101 09:30:52.391999   36048 logs.go:282] 1 containers: [f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92]
	I1101 09:30:52.392073   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.397045   36048 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1101 09:30:52.397144   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1101 09:30:52.452514   36048 cri.go:89] found id: "829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be"
	I1101 09:30:52.452544   36048 cri.go:89] found id: ""
	I1101 09:30:52.452555   36048 logs.go:282] 1 containers: [829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be]
	I1101 09:30:52.452617   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.457241   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1101 09:30:52.457338   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1101 09:30:52.506045   36048 cri.go:89] found id: "a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634"
	I1101 09:30:52.506071   36048 cri.go:89] found id: "4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558"
	I1101 09:30:52.506077   36048 cri.go:89] found id: ""
	I1101 09:30:52.506098   36048 logs.go:282] 2 containers: [a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634 4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558]
	I1101 09:30:52.506165   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.511265   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.515940   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1101 09:30:52.516021   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1101 09:30:52.559510   36048 cri.go:89] found id: "c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6"
	I1101 09:30:52.559536   36048 cri.go:89] found id: ""
	I1101 09:30:52.559548   36048 logs.go:282] 1 containers: [c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6]
	I1101 09:30:52.559618   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.564486   36048 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1101 09:30:52.564567   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1101 09:30:52.614947   36048 cri.go:89] found id: "936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea"
	I1101 09:30:52.614978   36048 cri.go:89] found id: ""
	I1101 09:30:52.614989   36048 logs.go:282] 1 containers: [936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea]
	I1101 09:30:52.615057   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.620382   36048 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1101 09:30:52.620474   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1101 09:30:52.668290   36048 cri.go:89] found id: ""
	I1101 09:30:52.668320   36048 logs.go:282] 0 containers: []
	W1101 09:30:52.668331   36048 logs.go:284] No container was found matching "kindnet"
	I1101 09:30:52.668341   36048 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1101 09:30:52.668413   36048 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1101 09:30:52.713999   36048 cri.go:89] found id: "705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d"
	I1101 09:30:52.714024   36048 cri.go:89] found id: ""
	I1101 09:30:52.714033   36048 logs.go:282] 1 containers: [705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d]
	I1101 09:30:52.714095   36048 ssh_runner.go:195] Run: which crictl
	I1101 09:30:52.719682   36048 logs.go:123] Gathering logs for etcd [f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92] ...
	I1101 09:30:52.719705   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9dd7881e6bb63603f4ed3e4da6c514d13e60e1ead5871d13f37ae6ff8ec3f92"
	I1101 09:30:52.787556   36048 logs.go:123] Gathering logs for coredns [829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be] ...
	I1101 09:30:52.787597   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 829a7049ca8686e23aa5a0266e3f8e6b74f1e010bd31bb273affd0bf84d915be"
	I1101 09:30:52.842542   36048 logs.go:123] Gathering logs for kube-scheduler [a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634] ...
	I1101 09:30:52.842577   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8d9bf8913ecbdb49a1e5d6e4c23fecc667057073231a06fe375be2a99eb0634"
	I1101 09:30:52.906147   36048 logs.go:123] Gathering logs for kube-scheduler [4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558] ...
	I1101 09:30:52.906184   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4a827f9d08c79b703503f3602ae37b95dc8f688f3ef925017a1e492167c41558"
	I1101 09:30:52.959353   36048 logs.go:123] Gathering logs for CRI-O ...
	I1101 09:30:52.959387   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1101 09:30:53.343493   36048 logs.go:123] Gathering logs for dmesg ...
	I1101 09:30:53.343529   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1101 09:30:53.364422   36048 logs.go:123] Gathering logs for kube-proxy [c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6] ...
	I1101 09:30:53.364452   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7f0148e52e4924e35347f63207cf72eee31c760c186643e54000ca0a18006b6"
	I1101 09:30:53.413696   36048 logs.go:123] Gathering logs for kube-controller-manager [936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea] ...
	I1101 09:30:53.413727   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 936fd2ed292ea158a66e321e5e5c589755deca7d69ab01e0bbea8be585b766ea"
	I1101 09:30:53.474763   36048 logs.go:123] Gathering logs for storage-provisioner [705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d] ...
	I1101 09:30:53.474801   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 705a325c05b6c0085e7ff8b3e258e47d9fe8412452bed3638031e0f6c545de9d"
	I1101 09:30:53.529352   36048 logs.go:123] Gathering logs for container status ...
	I1101 09:30:53.529386   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1101 09:30:53.588413   36048 logs.go:123] Gathering logs for kubelet ...
	I1101 09:30:53.588442   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1101 09:30:53.683444   36048 logs.go:123] Gathering logs for describe nodes ...
	I1101 09:30:53.683492   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1101 09:30:53.764520   36048 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1101 09:30:53.764550   36048 logs.go:123] Gathering logs for kube-apiserver [853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4] ...
	I1101 09:30:53.764567   36048 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 853cf39dd0a48500a7f095970a87b6422244b73af3e001ff3d3cd7b5347125a4"
	W1101 09:30:51.660037   40325 pod_ready.go:104] pod "etcd-pause-855890" is not "Ready", error: <nil>
	I1101 09:30:53.161330   40325 pod_ready.go:94] pod "etcd-pause-855890" is "Ready"
	I1101 09:30:53.161371   40325 pod_ready.go:86] duration metric: took 10.008019678s for pod "etcd-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.165392   40325 pod_ready.go:83] waiting for pod "kube-apiserver-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.172466   40325 pod_ready.go:94] pod "kube-apiserver-pause-855890" is "Ready"
	I1101 09:30:53.172495   40325 pod_ready.go:86] duration metric: took 7.072878ms for pod "kube-apiserver-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.175701   40325 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.183604   40325 pod_ready.go:94] pod "kube-controller-manager-pause-855890" is "Ready"
	I1101 09:30:53.183633   40325 pod_ready.go:86] duration metric: took 7.905744ms for pod "kube-controller-manager-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.187754   40325 pod_ready.go:83] waiting for pod "kube-proxy-9dngv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.358030   40325 pod_ready.go:94] pod "kube-proxy-9dngv" is "Ready"
	I1101 09:30:53.358076   40325 pod_ready.go:86] duration metric: took 170.284752ms for pod "kube-proxy-9dngv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:53.557973   40325 pod_ready.go:83] waiting for pod "kube-scheduler-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:55.564590   40325 pod_ready.go:94] pod "kube-scheduler-pause-855890" is "Ready"
	I1101 09:30:55.564618   40325 pod_ready.go:86] duration metric: took 2.006613741s for pod "kube-scheduler-pause-855890" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:30:55.564630   40325 pod_ready.go:40] duration metric: took 12.422741907s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:30:55.609099   40325 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:30:55.610975   40325 out.go:179] * Done! kubectl is now configured to use "pause-855890" cluster and "default" namespace by default
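
The pod_ready waits above poll each control-plane pod in kube-system until its Ready condition is True (or a timeout expires). A minimal client-go sketch of that wait; the kubeconfig path is hypothetical and the two-second poll interval is an assumption, not minikube's exact setting:

// podready.go: wait for a named kube-system pod to report Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(client kubernetes.Interface, namespace, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient apiserver errors: keep waiting
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	start := time.Now()
	if err := waitPodReady(client, "kube-system", "etcd-pause-855890", 4*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
		return
	}
	fmt.Printf("pod \"etcd-pause-855890\" is \"Ready\" (took %s)\n", time.Since(start))
}
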
	I1101 09:30:52.510171   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:52.511098   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:52.511109   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:52.511635   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:52.511666   40614 retry.go:31] will retry after 2.383700857s: waiting for domain to come up
	I1101 09:30:54.898275   40614 main.go:143] libmachine: domain cert-options-414547 has defined MAC address 52:54:00:1f:e9:df in network mk-cert-options-414547
	I1101 09:30:54.898897   40614 main.go:143] libmachine: no network interface addresses found for domain cert-options-414547 (source=lease)
	I1101 09:30:54.898907   40614 main.go:143] libmachine: trying to list again with source=arp
	I1101 09:30:54.899261   40614 main.go:143] libmachine: unable to find current IP address of domain cert-options-414547 in network mk-cert-options-414547 (interfaces detected: [])
	I1101 09:30:54.899287   40614 retry.go:31] will retry after 2.961937452s: waiting for domain to come up
	
	
	==> CRI-O <==
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.294900961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989458294877749,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=557ebf15-e33d-4b6f-9896-4b70c403eb24 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.295561620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ada97884-8db5-4b0e-a943-0ff2f58f554d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.295619634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ada97884-8db5-4b0e-a943-0ff2f58f554d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.295859042Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b05a9cd351437e0fa07813290ba4db602fb5fd329e87fef8dac936cb25fbbdc5,PodSandboxId:c2bc92475d1593ddf519c895c522f57d80caf69d9da18a7c1066078044d847f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989441575435230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8842af31785cd52ebbdafa0959eae8cea4aee7dfa504f74935e64f90709ff576,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e731e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989440850021913,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa8d5c2ef0eef267589224de026d832239ec04df5fee97d3944f096c2351184,PodSandboxId:adf90291a5f3d0985ab7c580a2e1beae78e98f9c1a0ab3417500f9d17663d454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989435967848374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0ac0166bc29fa2a4344cda6805cc8ba9089589fa84ee6e2de3bd9a26d8960,PodSandboxId:87b0274ceb976b02f4f6978ba9828b00a55cdb32f959f141e3e82d9270de8d27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989435814747200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f695868c1e5050f1140aca569a60bff1c122e7424221706218b843055d4559d,PodSandboxId:4c3d559ba5fa01bc3aac60801a64bc359f8ae76726193fc6c348fbc642ffd2dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989435782632619,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c13e07694a27604a7283ecab14599ca94c057df8e41b2c9ed9bb8fb8b083292,PodSandboxId:97776589cf2da870fec960d9f5790315b7dd3d29fa672414d44ee224ffc83a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989435747651560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e73
1e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989435687681651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea,PodSandboxId:49c584770c098c55d794e0d0422cae69111ee9cc441aa54562e8a8a6a29542d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989370726875613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d,PodSandboxId:3be846a37f831be91223346e485bf9483fb3085e5cba6ec310c004d56a28d2c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989370042559423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b
71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08997e7db2aca95d5ecb59b8e74931e0088d93a536271a5655c21a88adf47b67,PodSandboxId:43314a6aa6707c482ada8cf19c54fa3f0cfabbf57e068ef121b7a3bdf613b6f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989357561676152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe4e98bd2bc2d934f1cf0670af24bc144ea2818c8501b0956e2a6840cec5315b,PodSandboxId:0ee04d640f242b1aa4baed65c7026f00ff1865df3a005bb54e2f998520f2881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989357544814666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecfbd390ded36fbce68790e962191e98ce02780cad561cb62f869a589724115,PodSandboxId:7631e8b0c6a1306cbd41fc71db442dd5cac3573abbdc7ae08758d607584b5de3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989357502177160,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ada97884-8db5-4b0e-a943-0ff2f58f554d name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.348264402Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4013735c-2f33-4993-a736-6ab290c21968 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.348469392Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4013735c-2f33-4993-a736-6ab290c21968 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.350498010Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ff019049-cf1f-4ff2-b7fb-16ffd36f65b7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.351331298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989458351305998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ff019049-cf1f-4ff2-b7fb-16ffd36f65b7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.352442293Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8f19b75-c404-4f33-87e7-c42c3dd861a6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.352541288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8f19b75-c404-4f33-87e7-c42c3dd861a6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.352915807Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b05a9cd351437e0fa07813290ba4db602fb5fd329e87fef8dac936cb25fbbdc5,PodSandboxId:c2bc92475d1593ddf519c895c522f57d80caf69d9da18a7c1066078044d847f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989441575435230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8842af31785cd52ebbdafa0959eae8cea4aee7dfa504f74935e64f90709ff576,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e731e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989440850021913,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa8d5c2ef0eef267589224de026d832239ec04df5fee97d3944f096c2351184,PodSandboxId:adf90291a5f3d0985ab7c580a2e1beae78e98f9c1a0ab3417500f9d17663d454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989435967848374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0ac0166bc29fa2a4344cda6805cc8ba9089589fa84ee6e2de3bd9a26d8960,PodSandboxId:87b0274ceb976b02f4f6978ba9828b00a55cdb32f959f141e3e82d9270de8d27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989435814747200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f695868c1e5050f1140aca569a60bff1c122e7424221706218b843055d4559d,PodSandboxId:4c3d559ba5fa01bc3aac60801a64bc359f8ae76726193fc6c348fbc642ffd2dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989435782632619,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c13e07694a27604a7283ecab14599ca94c057df8e41b2c9ed9bb8fb8b083292,PodSandboxId:97776589cf2da870fec960d9f5790315b7dd3d29fa672414d44ee224ffc83a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989435747651560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e73
1e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989435687681651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea,PodSandboxId:49c584770c098c55d794e0d0422cae69111ee9cc441aa54562e8a8a6a29542d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989370726875613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d,PodSandboxId:3be846a37f831be91223346e485bf9483fb3085e5cba6ec310c004d56a28d2c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989370042559423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b
71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08997e7db2aca95d5ecb59b8e74931e0088d93a536271a5655c21a88adf47b67,PodSandboxId:43314a6aa6707c482ada8cf19c54fa3f0cfabbf57e068ef121b7a3bdf613b6f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989357561676152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe4e98bd2bc2d934f1cf0670af24bc144ea2818c8501b0956e2a6840cec5315b,PodSandboxId:0ee04d640f242b1aa4baed65c7026f00ff1865df3a005bb54e2f998520f2881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989357544814666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecfbd390ded36fbce68790e962191e98ce02780cad561cb62f869a589724115,PodSandboxId:7631e8b0c6a1306cbd41fc71db442dd5cac3573abbdc7ae08758d607584b5de3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989357502177160,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8f19b75-c404-4f33-87e7-c42c3dd861a6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.401662455Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8664eeb4-156f-4ece-8616-5ab02b8a41b5 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.401749305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8664eeb4-156f-4ece-8616-5ab02b8a41b5 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.404709262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9996963-3f4c-41b5-bb94-a459bb55baf8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.405097112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989458405074779,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9996963-3f4c-41b5-bb94-a459bb55baf8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.406184431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1acfdab3-8712-4f99-ae42-e3d155f05666 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.406266837Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1acfdab3-8712-4f99-ae42-e3d155f05666 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.406656134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b05a9cd351437e0fa07813290ba4db602fb5fd329e87fef8dac936cb25fbbdc5,PodSandboxId:c2bc92475d1593ddf519c895c522f57d80caf69d9da18a7c1066078044d847f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989441575435230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8842af31785cd52ebbdafa0959eae8cea4aee7dfa504f74935e64f90709ff576,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e731e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989440850021913,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa8d5c2ef0eef267589224de026d832239ec04df5fee97d3944f096c2351184,PodSandboxId:adf90291a5f3d0985ab7c580a2e1beae78e98f9c1a0ab3417500f9d17663d454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989435967848374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0ac0166bc29fa2a4344cda6805cc8ba9089589fa84ee6e2de3bd9a26d8960,PodSandboxId:87b0274ceb976b02f4f6978ba9828b00a55cdb32f959f141e3e82d9270de8d27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989435814747200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f695868c1e5050f1140aca569a60bff1c122e7424221706218b843055d4559d,PodSandboxId:4c3d559ba5fa01bc3aac60801a64bc359f8ae76726193fc6c348fbc642ffd2dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989435782632619,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c13e07694a27604a7283ecab14599ca94c057df8e41b2c9ed9bb8fb8b083292,PodSandboxId:97776589cf2da870fec960d9f5790315b7dd3d29fa672414d44ee224ffc83a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989435747651560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e73
1e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989435687681651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea,PodSandboxId:49c584770c098c55d794e0d0422cae69111ee9cc441aa54562e8a8a6a29542d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989370726875613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d,PodSandboxId:3be846a37f831be91223346e485bf9483fb3085e5cba6ec310c004d56a28d2c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989370042559423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b
71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08997e7db2aca95d5ecb59b8e74931e0088d93a536271a5655c21a88adf47b67,PodSandboxId:43314a6aa6707c482ada8cf19c54fa3f0cfabbf57e068ef121b7a3bdf613b6f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989357561676152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe4e98bd2bc2d934f1cf0670af24bc144ea2818c8501b0956e2a6840cec5315b,PodSandboxId:0ee04d640f242b1aa4baed65c7026f00ff1865df3a005bb54e2f998520f2881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989357544814666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecfbd390ded36fbce68790e962191e98ce02780cad561cb62f869a589724115,PodSandboxId:7631e8b0c6a1306cbd41fc71db442dd5cac3573abbdc7ae08758d607584b5de3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989357502177160,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1acfdab3-8712-4f99-ae42-e3d155f05666 name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.456575787Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d7c72d3-2774-4a3d-8877-22e75fcf4ec8 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.456702701Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d7c72d3-2774-4a3d-8877-22e75fcf4ec8 name=/runtime.v1.RuntimeService/Version
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.458047298Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0f57cc7-e53c-4968-8fe7-21f2d14f185c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.458518482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1761989458458494175,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:127412,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0f57cc7-e53c-4968-8fe7-21f2d14f185c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.459114319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=636645f1-879e-44ce-b8fc-69784fcc91fc name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.459171174Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=636645f1-879e-44ce-b8fc-69784fcc91fc name=/runtime.v1.RuntimeService/ListContainers
	Nov 01 09:30:58 pause-855890 crio[2782]: time="2025-11-01 09:30:58.459558637Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b05a9cd351437e0fa07813290ba4db602fb5fd329e87fef8dac936cb25fbbdc5,PodSandboxId:c2bc92475d1593ddf519c895c522f57d80caf69d9da18a7c1066078044d847f7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_RUNNING,CreatedAt:1761989441575435230,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\",\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8842af31785cd52ebbdafa0959eae8cea4aee7dfa504f74935e64f90709ff576,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e731e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_RUNNING,CreatedAt:1761989440850021913,Labels
:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1aa8d5c2ef0eef267589224de026d832239ec04df5fee97d3944f096c2351184,PodSandboxId:adf90291a5f3d0985ab7c580a2e1beae78e98f9c1a0ab3417500f9d17663d454,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc2517
2553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_RUNNING,CreatedAt:1761989435967848374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71b0ac0166bc29fa2a4344cda6805cc8ba9089589fa84ee6e2de3bd9a26d8960,PodSandboxId:87b0274ceb976b02f4f6978ba9828b00a55cdb32f959f141e3e82d9270de8d27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108
c21a52d87648e30f3080d9cbadc97,State:CONTAINER_RUNNING,CreatedAt:1761989435814747200,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f695868c1e5050f1140aca569a60bff1c122e7424221706218b843055d4559d,PodSandboxId:4c3d559ba5fa01bc3aac60801a64bc359f8ae76726193fc6c348fbc642ffd2dc,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_RUNNING,CreatedAt:1761989435782632619,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c13e07694a27604a7283ecab14599ca94c057df8e41b2c9ed9bb8fb8b083292,PodSandboxId:97776589cf2da870fec960d9f5790315b7dd3d29fa672414d44ee224ffc83a5e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:7dd
6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_RUNNING,CreatedAt:1761989435747651560,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752,PodSandboxId:7a37e79d3e8d4c0ef35a83f051f7b117d160fb95b07e73
1e83b50a37d14e7f60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f,State:CONTAINER_EXITED,CreatedAt:1761989435687681651,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91fefe28619d5b5883bc05f672528776,},Annotations:map[string]string{io.kubernetes.container.hash: 9c112505,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Contai
ner{Id:317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea,PodSandboxId:49c584770c098c55d794e0d0422cae69111ee9cc441aa54562e8a8a6a29542d1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969,State:CONTAINER_EXITED,CreatedAt:1761989370726875613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bc5c9577-czz5l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0464c9e4-46a6-477e-94b6-fed9a6eb2966,},Annotations:map[string]string{io.kubernetes.container.hash: e9bf792,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"},{\"name\":\"liveness-probe\"
,\"containerPort\":8080,\"protocol\":\"TCP\"},{\"name\":\"readiness-probe\",\"containerPort\":8181,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d,PodSandboxId:3be846a37f831be91223346e485bf9483fb3085e5cba6ec310c004d56a28d2c6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7,State:CONTAINER_EXITED,CreatedAt:1761989370042559423,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9dngv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74e9d2ed-3e06-4b92-b
71e-0d3520d7d64b,},Annotations:map[string]string{io.kubernetes.container.hash: 96651ac1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08997e7db2aca95d5ecb59b8e74931e0088d93a536271a5655c21a88adf47b67,PodSandboxId:43314a6aa6707c482ada8cf19c54fa3f0cfabbf57e068ef121b7a3bdf613b6f5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115,State:CONTAINER_EXITED,CreatedAt:1761989357561676152,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 558b6f54d67163787c5e1ecfe5ab3e8f,},Annotations:map[string]string{
io.kubernetes.container.hash: e9e20c65,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe4e98bd2bc2d934f1cf0670af24bc144ea2818c8501b0956e2a6840cec5315b,PodSandboxId:0ee04d640f242b1aa4baed65c7026f00ff1865df3a005bb54e2f998520f2881e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813,State:CONTAINER_EXITED,CreatedAt:1761989357544814666,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-855890,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: 754a220d6e4e8f43883323d60c8e9daa,},Annotations:map[string]string{io.kubernetes.container.hash: af42bbeb,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fecfbd390ded36fbce68790e962191e98ce02780cad561cb62f869a589724115,PodSandboxId:7631e8b0c6a1306cbd41fc71db442dd5cac3573abbdc7ae08758d607584b5de3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97,State:CONTAINER_EXITED,CreatedAt:1761989357502177160,Labels:map[string]string{io.kubernetes.contai
ner.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-855890,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 853c13719832eff05dd623c8fa9733f9,},Annotations:map[string]string{io.kubernetes.container.hash: d0cc63c7,io.kubernetes.container.ports: [{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=636645f1-879e-44ce-b8fc-69784fcc91fc name=/runtime.v1.RuntimeService/ListContainers
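The entries above come from CRI-O's gRPC debug interceptor echoing the /runtime.v1.RuntimeService and /runtime.v1.ImageService calls (Version, ImageFsInfo, ListContainers) issued roughly every 50 ms during the post-mortem collection. As a minimal sketch only (not part of the test suite), the following Go program issues the same ListContainers call against CRI-O; the socket path and the google.golang.org/grpc / k8s.io/cri-api dependencies are assumptions about a typical minikube VM, not something the report itself pins down.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// CRI-O's default endpoint; the socket is root-owned, so run with
	// sufficient privileges (inside the minikube VM: via sudo).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter mirrors the "No filters were applied" debug line:
	// CRI-O returns every container, running and exited alike.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s  %-25s attempt=%d  %s\n",
			c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}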
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b05a9cd351437       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago       Running             coredns                   1                   c2bc92475d159       coredns-66bc5c9577-czz5l
	8842af31785cd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   17 seconds ago       Running             kube-controller-manager   2                   7a37e79d3e8d4       kube-controller-manager-pause-855890
	1aa8d5c2ef0ee       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   22 seconds ago       Running             kube-proxy                1                   adf90291a5f3d       kube-proxy-9dngv
	71b0ac0166bc2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   22 seconds ago       Running             kube-apiserver            1                   87b0274ceb976       kube-apiserver-pause-855890
	2f695868c1e50       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   22 seconds ago       Running             etcd                      1                   4c3d559ba5fa0       etcd-pause-855890
	2c13e07694a27       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   22 seconds ago       Running             kube-scheduler            1                   97776589cf2da       kube-scheduler-pause-855890
	3e7af9f11cce6       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   22 seconds ago       Exited              kube-controller-manager   1                   7a37e79d3e8d4       kube-controller-manager-pause-855890
	317a9379f524c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   About a minute ago   Exited              coredns                   0                   49c584770c098       coredns-66bc5c9577-czz5l
	c9ddabcf5ff16       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   About a minute ago   Exited              kube-proxy                0                   3be846a37f831       kube-proxy-9dngv
	08997e7db2aca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   About a minute ago   Exited              etcd                      0                   43314a6aa6707       etcd-pause-855890
	fe4e98bd2bc2d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   About a minute ago   Exited              kube-scheduler            0                   0ee04d640f242       kube-scheduler-pause-855890
	fecfbd390ded3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   About a minute ago   Exited              kube-apiserver            0                   7631e8b0c6a13       kube-apiserver-pause-855890
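The table above is the human-readable rendering of the same CRI data (compare the truncated IDs and attempt counters with the ListContainers dumps earlier in this section). Purely as an illustration, and assuming the k8s.io/cri-api types, this standalone snippet formats two rows copied from the log with text/tabwriter; it is not minikube's own table code.

package main

import (
	"fmt"
	"os"
	"text/tabwriter"
	"time"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Two entries hard-coded from the log so the snippet runs offline.
	containers := []*runtimeapi.Container{
		{
			Id:        "b05a9cd351437e0fa07813290ba4db602fb5fd329e87fef8dac936cb25fbbdc5",
			Metadata:  &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 1},
			State:     runtimeapi.ContainerState_CONTAINER_RUNNING,
			CreatedAt: 1761989441575435230, // nanoseconds since the epoch
		},
		{
			Id:        "317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea",
			Metadata:  &runtimeapi.ContainerMetadata{Name: "coredns", Attempt: 0},
			State:     runtimeapi.ContainerState_CONTAINER_EXITED,
			CreatedAt: 1761989370726875613,
		},
	}

	w := tabwriter.NewWriter(os.Stdout, 0, 4, 2, ' ', 0)
	fmt.Fprintln(w, "CONTAINER\tSTATE\tNAME\tATTEMPT\tCREATED")
	for _, c := range containers {
		age := time.Since(time.Unix(0, c.CreatedAt)).Truncate(time.Second)
		fmt.Fprintf(w, "%s\t%s\t%s\t%d\t%s ago\n",
			c.Id[:13], c.State, c.Metadata.Name, c.Metadata.Attempt, age)
	}
	w.Flush()
}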
	
	
	==> coredns [317a9379f524cbb808decb995783218cc7254c0808bd3cdbc03b94f7e63149ea] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	[INFO] Reloading complete
	[INFO] 127.0.0.1:42025 - 56170 "HINFO IN 151783329721779275.2056587546879306900. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.070216253s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b05a9cd351437e0fa07813290ba4db602fb5fd329e87fef8dac936cb25fbbdc5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59063 - 40051 "HINFO IN 1248045618366012909.6732322449841863943. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0823087s
	
	
	==> describe nodes <==
	Name:               pause-855890
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-855890
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21e20c7776311c6e29254646bf2620ea610dd192
	                    minikube.k8s.io/name=pause-855890
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_29_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:29:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-855890
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:30:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:30:40 +0000   Sat, 01 Nov 2025 09:29:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:30:40 +0000   Sat, 01 Nov 2025 09:29:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:30:40 +0000   Sat, 01 Nov 2025 09:29:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:30:40 +0000   Sat, 01 Nov 2025 09:29:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.183
	  Hostname:    pause-855890
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035908Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8263723d57c466d93e7b8644cf81691
	  System UUID:                c8263723-d57c-466d-93e7-b8644cf81691
	  Boot ID:                    52811c4a-8c08-496b-893c-6f968120d3e5
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-czz5l                100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     89s
	  kube-system                 etcd-pause-855890                       100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         94s
	  kube-system                 kube-apiserver-pause-855890             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-pause-855890    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-9dngv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-pause-855890             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (5%)  170Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 87s                kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 95s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     94s                kubelet          Node pause-855890 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    94s                kubelet          Node pause-855890 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                94s                kubelet          Node pause-855890 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  94s                kubelet          Node pause-855890 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           90s                node-controller  Node pause-855890 event: Registered Node pause-855890 in Controller
	  Normal  Starting                 19s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node pause-855890 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node pause-855890 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x7 over 19s)  kubelet          Node pause-855890 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node pause-855890 event: Registered Node pause-855890 in Controller
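The section above corresponds to `kubectl describe node pause-855890`. For readers inspecting the same state programmatically, here is a minimal client-go sketch that reads the node conditions shown above; the kubeconfig path is an assumption (default location), and the node name is taken from this log.

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumes the usual ~/.kube/config written by `minikube start`.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(
		context.Background(), "pause-855890", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// The post-mortem hinges on the Ready condition; print them all.
	for _, cond := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
	}
}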
	
	
	==> dmesg <==
	[Nov 1 09:28] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000090] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.001082] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[Nov 1 09:29] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000024] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.085904] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.115930] kauditd_printk_skb: 74 callbacks suppressed
	[  +0.103846] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.172456] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.115208] kauditd_printk_skb: 18 callbacks suppressed
	[  +9.383228] kauditd_printk_skb: 222 callbacks suppressed
	[Nov 1 09:30] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.197664] kauditd_printk_skb: 56 callbacks suppressed
	[  +0.614375] kauditd_printk_skb: 260 callbacks suppressed
	[  +1.816193] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [08997e7db2aca95d5ecb59b8e74931e0088d93a536271a5655c21a88adf47b67] <==
	{"level":"info","ts":"2025-11-01T09:29:33.366381Z","caller":"traceutil/trace.go:172","msg":"trace[1578796124] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"134.669075ms","start":"2025-11-01T09:29:33.231645Z","end":"2025-11-01T09:29:33.366314Z","steps":["trace[1578796124] 'process raft request'  (duration: 134.38702ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:29:33.652699Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.786945ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13371919837915374771 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-czz5l.1873d7ffe81e6750\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-czz5l.1873d7ffe81e6750\" value_size:682 lease:4148547801060598596 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-01T09:29:33.652855Z","caller":"traceutil/trace.go:172","msg":"trace[359709472] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"190.166701ms","start":"2025-11-01T09:29:33.462671Z","end":"2025-11-01T09:29:33.652838Z","steps":["trace[359709472] 'process raft request'  (duration: 62.623508ms)","trace[359709472] 'compare'  (duration: 126.713366ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:29:33.860083Z","caller":"traceutil/trace.go:172","msg":"trace[1424517841] linearizableReadLoop","detail":"{readStateIndex:397; appliedIndex:397; }","duration":"106.670498ms","start":"2025-11-01T09:29:33.753397Z","end":"2025-11-01T09:29:33.860068Z","steps":["trace[1424517841] 'read index received'  (duration: 106.666179ms)","trace[1424517841] 'applied index is now lower than readState.Index'  (duration: 3.644µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:29:33.861264Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.850764ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-czz5l\" limit:1 ","response":"range_response_count:1 size:5630"}
	{"level":"info","ts":"2025-11-01T09:29:33.861324Z","caller":"traceutil/trace.go:172","msg":"trace[351083579] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-czz5l; range_end:; response_count:1; response_revision:385; }","duration":"107.922898ms","start":"2025-11-01T09:29:33.753392Z","end":"2025-11-01T09:29:33.861315Z","steps":["trace[351083579] 'agreement among raft nodes before linearized reading'  (duration: 106.731606ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:29:33.861674Z","caller":"traceutil/trace.go:172","msg":"trace[523220088] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"203.929384ms","start":"2025-11-01T09:29:33.657613Z","end":"2025-11-01T09:29:33.861543Z","steps":["trace[523220088] 'process raft request'  (duration: 202.507178ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:20.555155Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:30:20.555268Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-855890","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.183:2380"],"advertise-client-urls":["https://192.168.50.183:2379"]}
	{"level":"error","ts":"2025-11-01T09:30:20.555383Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:30:20.641543Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:30:20.641663Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:30:20.641708Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"42038717d2bfb992","current-leader-member-id":"42038717d2bfb992"}
	{"level":"info","ts":"2025-11-01T09:30:20.641791Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-01T09:30:20.641902Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-01T09:30:20.641883Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:30:20.641992Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:30:20.642002Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:30:20.642058Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.183:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:30:20.642070Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.183:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:30:20.642087Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.183:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:30:20.649249Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.50.183:2380"}
	{"level":"error","ts":"2025-11-01T09:30:20.649477Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.50.183:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:30:20.649537Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.50.183:2380"}
	{"level":"info","ts":"2025-11-01T09:30:20.649599Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-855890","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.183:2380"],"advertise-client-urls":["https://192.168.50.183:2379"]}
	
	
	==> etcd [2f695868c1e5050f1140aca569a60bff1c122e7424221706218b843055d4559d] <==
	{"level":"warn","ts":"2025-11-01T09:30:39.334735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.356323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.392200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.431855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.461945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.488062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.508322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.526193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.537432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.550412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.556812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.586946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.592413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.605527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.615226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.639920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.660239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.676630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:30:39.768451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57040","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:30:48.457225Z","caller":"traceutil/trace.go:172","msg":"trace[2066850321] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"104.365296ms","start":"2025-11-01T09:30:48.352847Z","end":"2025-11-01T09:30:48.457212Z","steps":["trace[2066850321] 'process raft request'  (duration: 104.241587ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:30:48.763422Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"197.171452ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:30:48.763704Z","caller":"traceutil/trace.go:172","msg":"trace[1806969416] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:474; }","duration":"197.539219ms","start":"2025-11-01T09:30:48.566148Z","end":"2025-11-01T09:30:48.763687Z","steps":["trace[1806969416] 'range keys from in-memory index tree'  (duration: 197.058894ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:30:48.763792Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.18035ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-855890\" limit:1 ","response":"range_response_count:1 size:6083"}
	{"level":"info","ts":"2025-11-01T09:30:48.763835Z","caller":"traceutil/trace.go:172","msg":"trace[532645769] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-855890; range_end:; response_count:1; response_revision:474; }","duration":"113.232095ms","start":"2025-11-01T09:30:48.650593Z","end":"2025-11-01T09:30:48.763825Z","steps":["trace[532645769] 'range keys from in-memory index tree'  (duration: 113.052623ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:30:48.937936Z","caller":"traceutil/trace.go:172","msg":"trace[1643631866] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"161.121001ms","start":"2025-11-01T09:30:48.776801Z","end":"2025-11-01T09:30:48.937922Z","steps":["trace[1643631866] 'process raft request'  (duration: 161.002702ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:30:58 up 2 min,  0 users,  load average: 1.86, 0.72, 0.27
	Linux pause-855890 6.6.95 #1 SMP PREEMPT_DYNAMIC Tue Oct 28 16:58:05 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [71b0ac0166bc29fa2a4344cda6805cc8ba9089589fa84ee6e2de3bd9a26d8960] <==
	I1101 09:30:40.515806       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:30:40.543893       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:30:40.547820       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:30:40.548554       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:30:40.548784       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:30:40.549250       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:30:40.549289       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:30:40.549305       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:30:40.570469       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1101 09:30:40.574877       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:30:40.579306       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:30:40.581725       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:30:40.581786       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:30:40.581793       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:30:40.582337       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:30:40.587451       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:30:40.615413       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:30:41.372049       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:30:41.412078       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:30:42.604991       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:30:42.656199       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:30:42.705683       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:30:42.723938       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:30:44.665473       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:30:44.966817       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [fecfbd390ded36fbce68790e962191e98ce02780cad561cb62f869a589724115] <==
	W1101 09:30:20.566729       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.566772       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.566808       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.566846       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.566885       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.576829       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.577085       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.577234       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.577325       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.577787       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.578008       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.578151       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.578434       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.578647       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.578857       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.579000       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.579300       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.577798       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.580675       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.581462       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.581852       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.582100       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.582193       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.582549       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1101 09:30:20.583555       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752] <==
	
	
	==> kube-controller-manager [8842af31785cd52ebbdafa0959eae8cea4aee7dfa504f74935e64f90709ff576] <==
	I1101 09:30:44.664368       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:30:44.667466       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:30:44.667653       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:30:44.669921       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:30:44.670064       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:30:44.670128       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:30:44.670134       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:30:44.670149       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:30:44.670416       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:30:44.674677       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:30:44.674751       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:30:44.704795       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:30:44.705705       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:30:44.707731       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:30:44.708472       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:30:44.713110       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:30:44.713225       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:30:44.713270       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:30:44.713311       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-855890"
	I1101 09:30:44.713586       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:30:44.713828       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:30:44.714257       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:30:44.714510       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:30:44.716586       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:30:44.717183       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [1aa8d5c2ef0eef267589224de026d832239ec04df5fee97d3944f096c2351184] <==
	I1101 09:30:41.762216       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:30:41.862826       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:30:41.862902       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.183"]
	E1101 09:30:41.862990       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:30:41.933023       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:30:41.933095       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:30:41.933125       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:30:41.949815       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:30:41.950227       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:30:41.950256       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:30:41.958665       1 config.go:309] "Starting node config controller"
	I1101 09:30:41.958691       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:30:41.958714       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:30:41.958959       1 config.go:200] "Starting service config controller"
	I1101 09:30:41.959414       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:30:41.959096       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:30:41.959534       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:30:41.959119       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:30:41.959621       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:30:42.059869       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:30:42.059909       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:30:42.059883       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d] <==
	I1101 09:29:30.487472       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:29:30.587650       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:29:30.587683       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.183"]
	E1101 09:29:30.587775       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:29:30.721926       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1101 09:29:30.722076       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1101 09:29:30.722168       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:29:30.736638       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:29:30.737251       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:29:30.737264       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:29:30.745335       1 config.go:200] "Starting service config controller"
	I1101 09:29:30.745381       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:29:30.745421       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:29:30.745426       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:29:30.745436       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:29:30.745439       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:29:30.753033       1 config.go:309] "Starting node config controller"
	I1101 09:29:30.759985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:29:30.760001       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:29:30.848811       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:29:30.847325       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:29:30.849821       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2c13e07694a27604a7283ecab14599ca94c057df8e41b2c9ed9bb8fb8b083292] <==
	I1101 09:30:38.567604       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:30:40.606586       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:30:40.606779       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:30:40.616668       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:30:40.616743       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:30:40.616818       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:40.616846       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:40.616872       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:30:40.616890       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:30:40.616901       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:30:40.617006       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:30:40.718398       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:30:40.718538       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:40.719071       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [fe4e98bd2bc2d934f1cf0670af24bc144ea2818c8501b0956e2a6840cec5315b] <==
	E1101 09:29:20.537122       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:29:20.537176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:29:21.351433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:29:21.440049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:29:21.450513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:29:21.454045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:29:21.498657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:29:21.516940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:29:21.527211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:29:21.592710       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:29:21.616587       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:29:21.644876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:29:21.707089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:29:21.730952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:29:21.785953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:29:21.811061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:29:21.873390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:29:21.967019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1101 09:29:24.820187       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:20.570758       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 09:30:20.575594       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:30:20.577692       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:30:20.576207       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:30:20.580209       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 09:30:20.580250       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.564145    3636 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.565938    3636 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: E1101 09:30:40.571332    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-855890\" already exists" pod="kube-system/etcd-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.571410    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: E1101 09:30:40.588901    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-855890\" already exists" pod="kube-system/kube-apiserver-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.588977    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: E1101 09:30:40.625564    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-855890\" already exists" pod="kube-system/kube-controller-manager-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.625620    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: E1101 09:30:40.649287    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-855890\" already exists" pod="kube-system/kube-scheduler-pause-855890"
	Nov 01 09:30:40 pause-855890 kubelet[3636]: I1101 09:30:40.829968    3636 scope.go:117] "RemoveContainer" containerID="3e7af9f11cce68fead72b9ad0539401f7ebcdafd692c4284566fa26af1d67752"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.219380    3636 apiserver.go:52] "Watching apiserver"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.268876    3636 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.359197    3636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74e9d2ed-3e06-4b92-b71e-0d3520d7d64b-xtables-lock\") pod \"kube-proxy-9dngv\" (UID: \"74e9d2ed-3e06-4b92-b71e-0d3520d7d64b\") " pod="kube-system/kube-proxy-9dngv"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.360455    3636 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74e9d2ed-3e06-4b92-b71e-0d3520d7d64b-lib-modules\") pod \"kube-proxy-9dngv\" (UID: \"74e9d2ed-3e06-4b92-b71e-0d3520d7d64b\") " pod="kube-system/kube-proxy-9dngv"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.510684    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.510693    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.510782    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.511127    3636 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: I1101 09:30:41.526111    3636 scope.go:117] "RemoveContainer" containerID="c9ddabcf5ff167d0756b588b55f5f1adea8970f8165ccf6a8500ebbc7da0692d"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: E1101 09:30:41.632042    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-855890\" already exists" pod="kube-system/kube-apiserver-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: E1101 09:30:41.644935    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-pause-855890\" already exists" pod="kube-system/etcd-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: E1101 09:30:41.645651    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-855890\" already exists" pod="kube-system/kube-controller-manager-pause-855890"
	Nov 01 09:30:41 pause-855890 kubelet[3636]: E1101 09:30:41.666933    3636 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-855890\" already exists" pod="kube-system/kube-scheduler-pause-855890"
	Nov 01 09:30:49 pause-855890 kubelet[3636]: E1101 09:30:49.521283    3636 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1761989449520570227  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	Nov 01 09:30:49 pause-855890 kubelet[3636]: E1101 09:30:49.521891    3636 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1761989449520570227  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:127412}  inodes_used:{value:57}}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-855890 -n pause-855890
helpers_test.go:269: (dbg) Run:  kubectl --context pause-855890 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (54.14s)


Test pass (299/343)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.23
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.16
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.69
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.15
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.63
22 TestOffline 99.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 162.48
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 10.52
35 TestAddons/parallel/Registry 18.16
36 TestAddons/parallel/RegistryCreds 0.67
38 TestAddons/parallel/InspektorGadget 6.29
39 TestAddons/parallel/MetricsServer 7.17
41 TestAddons/parallel/CSI 63.09
42 TestAddons/parallel/Headlamp 20.79
43 TestAddons/parallel/CloudSpanner 6.57
44 TestAddons/parallel/LocalPath 58.01
45 TestAddons/parallel/NvidiaDevicePlugin 6.73
46 TestAddons/parallel/Yakd 10.92
48 TestAddons/StoppedEnableDisable 84.22
49 TestCertOptions 45.43
50 TestCertExpiration 360.53
52 TestForceSystemdFlag 55.15
53 TestForceSystemdEnv 61.35
58 TestErrorSpam/setup 40.11
59 TestErrorSpam/start 0.34
60 TestErrorSpam/status 0.67
61 TestErrorSpam/pause 1.56
62 TestErrorSpam/unpause 1.71
63 TestErrorSpam/stop 86.75
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 82.06
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 38.29
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.15
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.18
75 TestFunctional/serial/CacheCmd/cache/add_local 1.99
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 32.21
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.44
86 TestFunctional/serial/LogsFileCmd 1.43
87 TestFunctional/serial/InvalidService 4.58
89 TestFunctional/parallel/ConfigCmd 0.42
90 TestFunctional/parallel/DashboardCmd 17.02
91 TestFunctional/parallel/DryRun 0.23
92 TestFunctional/parallel/InternationalLanguage 0.13
93 TestFunctional/parallel/StatusCmd 0.87
97 TestFunctional/parallel/ServiceCmdConnect 15.94
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 39.91
101 TestFunctional/parallel/SSHCmd 0.32
102 TestFunctional/parallel/CpCmd 1.06
103 TestFunctional/parallel/MySQL 27.48
104 TestFunctional/parallel/FileSync 0.19
105 TestFunctional/parallel/CertSync 1.01
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.32
113 TestFunctional/parallel/License 0.43
114 TestFunctional/parallel/ServiceCmd/DeployApp 8.19
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
116 TestFunctional/parallel/ProfileCmd/profile_list 0.38
117 TestFunctional/parallel/MountCmd/any-port 19.86
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
128 TestFunctional/parallel/ServiceCmd/List 0.28
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.25
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
131 TestFunctional/parallel/ServiceCmd/Format 0.39
132 TestFunctional/parallel/ServiceCmd/URL 0.3
133 TestFunctional/parallel/MountCmd/specific-port 1.55
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.19
135 TestFunctional/parallel/Version/short 0.06
136 TestFunctional/parallel/Version/components 0.64
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.59
142 TestFunctional/parallel/ImageCommands/Setup 1.6
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.64
144 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
145 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
146 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.66
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.49
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.53
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 209.99
161 TestMultiControlPlane/serial/DeployApp 56.7
162 TestMultiControlPlane/serial/PingHostFromPods 1.28
163 TestMultiControlPlane/serial/AddWorkerNode 44.95
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.67
166 TestMultiControlPlane/serial/CopyFile 10.69
167 TestMultiControlPlane/serial/StopSecondaryNode 78.68
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.51
169 TestMultiControlPlane/serial/RestartSecondaryNode 37.96
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.71
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 383.8
172 TestMultiControlPlane/serial/DeleteSecondaryNode 18.65
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.5
174 TestMultiControlPlane/serial/StopCluster 241.64
175 TestMultiControlPlane/serial/RestartCluster 101.52
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.49
177 TestMultiControlPlane/serial/AddSecondaryNode 78.33
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.68
183 TestJSONOutput/start/Command 54.92
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.7
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.64
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.07
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.23
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 81.66
215 TestMountStart/serial/StartWithMountFirst 22.29
216 TestMountStart/serial/VerifyMountFirst 0.3
217 TestMountStart/serial/StartWithMountSecond 22.28
218 TestMountStart/serial/VerifyMountSecond 0.3
219 TestMountStart/serial/DeleteFirst 0.69
220 TestMountStart/serial/VerifyMountPostDelete 0.3
221 TestMountStart/serial/Stop 1.3
222 TestMountStart/serial/RestartStopped 18.14
223 TestMountStart/serial/VerifyMountPostStop 0.31
226 TestMultiNode/serial/FreshStart2Nodes 100.39
227 TestMultiNode/serial/DeployApp2Nodes 6.59
228 TestMultiNode/serial/PingHostFrom2Pods 0.85
229 TestMultiNode/serial/AddNode 43.94
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.46
232 TestMultiNode/serial/CopyFile 5.91
233 TestMultiNode/serial/StopNode 2.29
234 TestMultiNode/serial/StartAfterStop 40.8
235 TestMultiNode/serial/RestartKeepsNodes 298.9
236 TestMultiNode/serial/DeleteNode 2.53
237 TestMultiNode/serial/StopMultiNode 176.38
238 TestMultiNode/serial/RestartMultiNode 122.58
239 TestMultiNode/serial/ValidateNameConflict 40.12
246 TestScheduledStopUnix 109.65
250 TestRunningBinaryUpgrade 120.98
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 103.83
257 TestStoppedBinaryUpgrade/Setup 0.46
258 TestStoppedBinaryUpgrade/Upgrade 122.53
259 TestNoKubernetes/serial/StartWithStopK8s 45.32
260 TestNoKubernetes/serial/Start 38.82
261 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
270 TestPause/serial/Start 85.52
271 TestNoKubernetes/serial/VerifyK8sNotRunning 0.16
272 TestNoKubernetes/serial/ProfileList 0.92
273 TestNoKubernetes/serial/Stop 1.23
274 TestNoKubernetes/serial/StartNoArgs 40.65
282 TestNetworkPlugins/group/false 3.74
286 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
287 TestISOImage/Setup 21.57
289 TestISOImage/Binaries/crictl 0.19
290 TestISOImage/Binaries/curl 0.19
291 TestISOImage/Binaries/docker 0.2
292 TestISOImage/Binaries/git 0.18
293 TestISOImage/Binaries/iptables 0.2
294 TestISOImage/Binaries/podman 0.2
295 TestISOImage/Binaries/rsync 0.18
296 TestISOImage/Binaries/socat 0.19
297 TestISOImage/Binaries/wget 0.17
298 TestISOImage/Binaries/VBoxControl 0.17
299 TestISOImage/Binaries/VBoxService 0.2
302 TestStartStop/group/old-k8s-version/serial/FirstStart 96.62
304 TestStartStop/group/no-preload/serial/FirstStart 103.77
305 TestStartStop/group/old-k8s-version/serial/DeployApp 10.32
306 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.16
307 TestStartStop/group/old-k8s-version/serial/Stop 84
308 TestStartStop/group/no-preload/serial/DeployApp 11.34
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
310 TestStartStop/group/no-preload/serial/Stop 85.86
311 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
312 TestStartStop/group/old-k8s-version/serial/SecondStart 44.69
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.14
314 TestStartStop/group/no-preload/serial/SecondStart 56.79
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 9.01
316 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
317 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
318 TestStartStop/group/old-k8s-version/serial/Pause 2.9
320 TestStartStop/group/embed-certs/serial/FirstStart 54.81
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 96.82
323 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.01
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
325 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
326 TestStartStop/group/no-preload/serial/Pause 3.04
328 TestStartStop/group/newest-cni/serial/FirstStart 46.52
329 TestStartStop/group/embed-certs/serial/DeployApp 11.33
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
331 TestStartStop/group/embed-certs/serial/Stop 87.78
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.18
334 TestStartStop/group/newest-cni/serial/Stop 11.13
335 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.32
336 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
337 TestStartStop/group/newest-cni/serial/SecondStart 32.21
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
339 TestStartStop/group/default-k8s-diff-port/serial/Stop 87.71
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
343 TestStartStop/group/newest-cni/serial/Pause 2.46
344 TestNetworkPlugins/group/auto/Start 90.62
345 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
346 TestStartStop/group/embed-certs/serial/SecondStart 55.88
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.15
348 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 44.09
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
350 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
352 TestStartStop/group/embed-certs/serial/Pause 2.81
353 TestNetworkPlugins/group/kindnet/Start 91
354 TestNetworkPlugins/group/auto/KubeletFlags 0.2
355 TestNetworkPlugins/group/auto/NetCatPod 11.25
356 TestNetworkPlugins/group/auto/DNS 0.17
357 TestNetworkPlugins/group/auto/Localhost 0.18
358 TestNetworkPlugins/group/auto/HairPin 0.15
359 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
360 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
361 TestNetworkPlugins/group/calico/Start 72.41
362 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.96
364 TestNetworkPlugins/group/custom-flannel/Start 86.58
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/kindnet/KubeletFlags 0.18
367 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
368 TestNetworkPlugins/group/kindnet/DNS 0.15
369 TestNetworkPlugins/group/kindnet/Localhost 0.12
370 TestNetworkPlugins/group/kindnet/HairPin 0.12
371 TestNetworkPlugins/group/calico/ControllerPod 6.01
372 TestNetworkPlugins/group/calico/KubeletFlags 0.23
373 TestNetworkPlugins/group/calico/NetCatPod 12.53
374 TestNetworkPlugins/group/enable-default-cni/Start 83.49
375 TestNetworkPlugins/group/calico/DNS 0.16
376 TestNetworkPlugins/group/calico/Localhost 0.14
377 TestNetworkPlugins/group/calico/HairPin 0.14
378 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
379 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.54
380 TestNetworkPlugins/group/custom-flannel/DNS 0.16
381 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
382 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
383 TestNetworkPlugins/group/flannel/Start 74.57
384 TestNetworkPlugins/group/bridge/Start 94.65
386 TestISOImage/PersistentMounts//data 0.19
387 TestISOImage/PersistentMounts//var/lib/docker 0.19
388 TestISOImage/PersistentMounts//var/lib/cni 0.18
389 TestISOImage/PersistentMounts//var/lib/kubelet 0.17
390 TestISOImage/PersistentMounts//var/lib/minikube 0.18
391 TestISOImage/PersistentMounts//var/lib/toolbox 0.18
392 TestISOImage/PersistentMounts//var/lib/boot2docker 0.17
393 TestISOImage/eBPFSupport 0.17
394 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.17
395 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.26
396 TestNetworkPlugins/group/flannel/ControllerPod 6.01
397 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
398 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
399 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
400 TestNetworkPlugins/group/flannel/KubeletFlags 0.18
401 TestNetworkPlugins/group/flannel/NetCatPod 11.26
402 TestNetworkPlugins/group/flannel/DNS 0.14
403 TestNetworkPlugins/group/flannel/Localhost 0.12
404 TestNetworkPlugins/group/flannel/HairPin 0.13
405 TestNetworkPlugins/group/bridge/KubeletFlags 0.17
406 TestNetworkPlugins/group/bridge/NetCatPod 9.26
407 TestNetworkPlugins/group/bridge/DNS 0.14
408 TestNetworkPlugins/group/bridge/Localhost 0.11
409 TestNetworkPlugins/group/bridge/HairPin 0.11
x
+
TestDownloadOnly/v1.28.0/json-events (7.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-665546 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-665546 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.228213096s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.23s)
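
For reference, a minimal Go sketch of consuming the newline-delimited JSON events that "minikube start -o=json" writes to stdout, as exercised by the json-events test above. The "type", "data" and "message" field names follow minikube's CloudEvents-style output and are assumptions here, not a stable contract.

	// jsonevents.go — read minikube's -o=json event stream from stdin and print
	// each event's type and message, e.g.:
	//   minikube start -o=json --download-only ... | go run jsonevents.go
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type event struct {
		Type string                 `json:"type"`
		Data map[string]interface{} `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			fmt.Printf("%-40s %v\n", ev.Type, ev.Data["message"])
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "read error:", err)
		}
	}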

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 08:29:14.845240    9793 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 08:29:14.845338    9793 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
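
The preload-exists check above amounts to a stat of the cached tarball. A rough stand-alone equivalent in Go, assuming the default ~/.minikube cache layout shown in the log (the Kubernetes and preload version strings are illustrative):

	// preloadcheck.go — report whether the v1.28.0 cri-o preload tarball is cached locally.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home, err := os.UserHomeDir()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
		if fi, err := os.Stat(tarball); err != nil {
			fmt.Println("preload missing:", err)
		} else {
			fmt.Printf("found %s (%d bytes)\n", tarball, fi.Size())
		}
	}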

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-665546
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-665546: exit status 85 (74.71935ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-665546 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-665546 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:07
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:07.668487    9805 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:07.668693    9805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:07.668702    9805 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:07.668705    9805 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:07.668909    9805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	W1101 08:29:07.669016    9805 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21835-5912/.minikube/config/config.json: open /home/jenkins/minikube-integration/21835-5912/.minikube/config/config.json: no such file or directory
	I1101 08:29:07.669495    9805 out.go:368] Setting JSON to true
	I1101 08:29:07.670368    9805 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":695,"bootTime":1761985053,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:29:07.670458    9805 start.go:143] virtualization: kvm guest
	I1101 08:29:07.672906    9805 out.go:99] [download-only-665546] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1101 08:29:07.673012    9805 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 08:29:07.673044    9805 notify.go:221] Checking for updates...
	I1101 08:29:07.674888    9805 out.go:171] MINIKUBE_LOCATION=21835
	I1101 08:29:07.676638    9805 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:07.681579    9805 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 08:29:07.683254    9805 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 08:29:07.684802    9805 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 08:29:07.687658    9805 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 08:29:07.687872    9805 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:29:08.206891    9805 out.go:99] Using the kvm2 driver based on user configuration
	I1101 08:29:08.206929    9805 start.go:309] selected driver: kvm2
	I1101 08:29:08.206936    9805 start.go:930] validating driver "kvm2" against <nil>
	I1101 08:29:08.207300    9805 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:29:08.207795    9805 start_flags.go:410] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1101 08:29:08.207966    9805 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 08:29:08.207989    9805 cni.go:84] Creating CNI manager for ""
	I1101 08:29:08.208039    9805 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1101 08:29:08.208047    9805 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1101 08:29:08.208118    9805 start.go:353] cluster config:
	{Name:download-only-665546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-665546 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:29:08.208307    9805 iso.go:125] acquiring lock: {Name:mk345092679db7c379cbaa00125c4f18e2b4a125 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:29:08.210151    9805 out.go:99] Downloading VM boot image ...
	I1101 08:29:08.210186    9805 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso.sha256 -> /home/jenkins/minikube-integration/21835-5912/.minikube/cache/iso/amd64/minikube-v1.37.0-1761658712-21800-amd64.iso
	I1101 08:29:11.656437    9805 out.go:99] Starting "download-only-665546" primary control-plane node in "download-only-665546" cluster
	I1101 08:29:11.656479    9805 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 08:29:11.679600    9805 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 08:29:11.679644    9805 cache.go:59] Caching tarball of preloaded images
	I1101 08:29:11.679868    9805 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 08:29:11.681734    9805 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 08:29:11.681757    9805 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 08:29:11.707771    9805 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1101 08:29:11.707904    9805 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-665546 host does not exist
	  To start a cluster, run: "minikube start -p download-only-665546"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-665546
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (3.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-362299 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-362299 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (3.687453639s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.69s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 08:29:18.906149    9793 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 08:29:18.906197    9793 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21835-5912/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-362299
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-362299: exit status 85 (72.049111ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                  ARGS                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-665546 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-665546 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                   │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ delete  │ -p download-only-665546                                                                                                                                                 │ download-only-665546 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │ 01 Nov 25 08:29 UTC │
	│ start   │ -o=json --download-only -p download-only-362299 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio │ download-only-362299 │ jenkins │ v1.37.0 │ 01 Nov 25 08:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:29:15
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:29:15.270201   10011 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:29:15.270475   10011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:15.270486   10011 out.go:374] Setting ErrFile to fd 2...
	I1101 08:29:15.270490   10011 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:29:15.270743   10011 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 08:29:15.271265   10011 out.go:368] Setting JSON to true
	I1101 08:29:15.272020   10011 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":702,"bootTime":1761985053,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:29:15.272116   10011 start.go:143] virtualization: kvm guest
	I1101 08:29:15.274049   10011 out.go:99] [download-only-362299] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:29:15.274254   10011 notify.go:221] Checking for updates...
	I1101 08:29:15.275663   10011 out.go:171] MINIKUBE_LOCATION=21835
	I1101 08:29:15.276950   10011 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:29:15.278400   10011 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 08:29:15.279872   10011 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 08:29:15.281088   10011 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-362299 host does not exist
	  To start a cluster, run: "minikube start -p download-only-362299"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-362299
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
I1101 08:29:19.547715    9793 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-153470 --alsologtostderr --binary-mirror http://127.0.0.1:38639 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-153470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-153470
--- PASS: TestBinaryMirror (0.63s)
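
The binary-mirror run above fetches kubectl through URLs of the form "...?checksum=file:<url>.sha256", i.e. the download is verified against a published SHA-256 digest. A small Go sketch of that verification step; the file name and expected digest below are placeholders, not values from this run:

	// checksum.go — hash a downloaded file and compare it to an expected hex digest.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func fileSHA256(path string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		h := sha256.New()
		if _, err := io.Copy(h, f); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		const expected = "<expected sha256 from the .sha256 file>" // placeholder
		got, err := fileSHA256("kubectl")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if got != expected {
			fmt.Printf("checksum mismatch: got %s\n", got)
			os.Exit(1)
		}
		fmt.Println("checksum OK")
	}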

                                                
                                    
x
+
TestOffline (99.31s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-712427 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-712427 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.341265571s)
helpers_test.go:175: Cleaning up "offline-crio-712427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-712427
--- PASS: TestOffline (99.31s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-468489
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-468489: exit status 85 (63.009401ms)

                                                
                                                
-- stdout --
	* Profile "addons-468489" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-468489"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
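
Several checks in this report treat a specific non-zero exit as the expected outcome (status 85 above for an addon command against a missing profile). A Go sketch of distinguishing the exit code of such a command; it assumes a minikube binary on PATH, whereas the report itself invokes out/minikube-linux-amd64:

	// exitcode.go — run an addon command and report its exit status.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("minikube", "addons", "enable", "dashboard", "-p", "addons-468489")
		out, err := cmd.CombinedOutput()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("exit status %d\n%s", ee.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("failed to start command:", err)
			return
		}
		fmt.Printf("succeeded:\n%s", out)
	}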

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-468489
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-468489: exit status 85 (60.397214ms)

                                                
                                                
-- stdout --
	* Profile "addons-468489" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-468489"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/Setup (162.48s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-468489 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-468489 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m42.478453811s)
--- PASS: TestAddons/Setup (162.48s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-468489 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-468489 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.52s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-468489 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-468489 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [41aabf94-d190-48f2-ba3e-eab75a7075ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [41aabf94-d190-48f2-ba3e-eab75a7075ad] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.004054045s
addons_test.go:694: (dbg) Run:  kubectl --context addons-468489 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-468489 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-468489 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.52s)
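
The FakeCredentials test verifies that the gcp-auth webhook injected credentials into the busybox pod by exec'ing printenv inside it. An equivalent stand-alone check in Go; the context and pod names are the ones used in this run, and kubectl is assumed to be on PATH:

	// podenv.go — read GOOGLE_APPLICATION_CREDENTIALS from inside the busybox pod.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "addons-468489", "exec", "busybox",
			"--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "printenv failed:", err)
			os.Exit(1)
		}
		fmt.Println("GOOGLE_APPLICATION_CREDENTIALS =", strings.TrimSpace(string(out)))
	}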

                                                
                                    
x
+
TestAddons/parallel/Registry (18.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.461918ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-xfrhn" [f3392fde-46f3-42dc-832d-20224c4f0549] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004376157s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-rhvsz" [55e49aa2-d062-47e2-8c75-d338178ea4a8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005160107s
addons_test.go:392: (dbg) Run:  kubectl --context addons-468489 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-468489 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-468489 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.357285416s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 ip
2025/11/01 08:32:39 [DEBUG] GET http://192.168.39.108:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.16s)
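
After the in-cluster wget probe, the Registry test also hits the node's registry endpoint from the host (the DEBUG GET against 192.168.39.108:5000 above). A minimal host-side probe in Go; the address is the one printed for this run, so substitute the output of "minikube ip" for other clusters:

	// registryprobe.go — GET the registry addon's node endpoint and print the status.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Get("http://192.168.39.108:5000/")
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry unreachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}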

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.280412ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-468489
addons_test.go:332: (dbg) Run:  kubectl --context addons-468489 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.67s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-gv7nr" [c1d68823-6547-42f4-8cfa-83aa02d048e0] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003776336s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.29s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.17s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 9.424339ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-fq64r" [fa41a986-93b3-4aff-bb56-494cf440e1f9] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005817135s
addons_test.go:463: (dbg) Run:  kubectl --context addons-468489 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-468489 addons disable metrics-server --alsologtostderr -v=1: (1.079438642s)
--- PASS: TestAddons/parallel/MetricsServer (7.17s)

                                                
                                    
x
+
TestAddons/parallel/CSI (63.09s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1101 08:32:28.818045    9793 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1101 08:32:28.828761    9793 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 08:32:28.828799    9793 kapi.go:107] duration metric: took 10.756101ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 10.77766ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-468489 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-468489 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [fa68456f-644b-4535-9a54-04dcc6e09135] Pending
helpers_test.go:352: "task-pv-pod" [fa68456f-644b-4535-9a54-04dcc6e09135] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [fa68456f-644b-4535-9a54-04dcc6e09135] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.004263258s
addons_test.go:572: (dbg) Run:  kubectl --context addons-468489 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-468489 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-468489 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-468489 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-468489 delete pod task-pv-pod: (1.346922919s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-468489 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-468489 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-468489 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [db673f88-541f-4919-89f1-c59ba0087f5a] Pending
helpers_test.go:352: "task-pv-pod-restore" [db673f88-541f-4919-89f1-c59ba0087f5a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [db673f88-541f-4919-89f1-c59ba0087f5a] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004838394s
addons_test.go:614: (dbg) Run:  kubectl --context addons-468489 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-468489 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-468489 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-468489 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.839980857s)
--- PASS: TestAddons/parallel/CSI (63.09s)
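
The repeated "get pvc ... -o jsonpath={.status.phase}" lines above are a poll loop waiting for the claim to become Bound. A compact Go version of that pattern, shelling out to kubectl; the context, namespace and PVC name are the ones from this run:

	// pvcwait.go — poll a PVC's phase until it is Bound or a deadline expires.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
		"time"
	)

	func pvcPhase(context, name string) (string, error) {
		out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name,
			"-n", "default", "-o", "jsonpath={.status.phase}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			if phase, err := pvcPhase("addons-468489", "hpvc"); err == nil && phase == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Fprintln(os.Stderr, "timed out waiting for pvc hpvc")
		os.Exit(1)
	}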

                                                
                                    
x
+
TestAddons/parallel/Headlamp (20.79s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-468489 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-xzhx5" [f2d62746-aee6-4574-8581-a90a1ac5f656] Pending
helpers_test.go:352: "headlamp-6945c6f4d-xzhx5" [f2d62746-aee6-4574-8581-a90a1ac5f656] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-xzhx5" [f2d62746-aee6-4574-8581-a90a1ac5f656] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.007174025s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-468489 addons disable headlamp --alsologtostderr -v=1: (5.893872438s)
--- PASS: TestAddons/parallel/Headlamp (20.79s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-jzn4j" [cf282896-bb5c-4220-a1f1-f23c9286c3bd] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003702519s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (58.01s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-468489 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-468489 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-468489 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [bd54a517-318a-4093-b03e-dd39a121334f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [bd54a517-318a-4093-b03e-dd39a121334f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [bd54a517-318a-4093-b03e-dd39a121334f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.004875909s
addons_test.go:967: (dbg) Run:  kubectl --context addons-468489 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 ssh "cat /opt/local-path-provisioner/pvc-cd2a8e6f-0b78-44b3-86d7-51ee5b835709_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-468489 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-468489 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-468489 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.184012885s)
--- PASS: TestAddons/parallel/LocalPath (58.01s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.73s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-f2qxl" [ec4ee384-540b-4a75-84b3-4e570d3d9f23] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005372549s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.73s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.92s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-zj9fx" [d478a4bd-f7a9-42ec-8531-7886369db3bf] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005501078s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-468489 addons disable yakd --alsologtostderr -v=1: (5.911729529s)
--- PASS: TestAddons/parallel/Yakd (10.92s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (84.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-468489
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-468489: (1m24.01870048s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-468489
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-468489
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-468489
--- PASS: TestAddons/StoppedEnableDisable (84.22s)

                                                
                                    
x
+
TestCertOptions (45.43s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-414547 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-414547 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (44.022706625s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-414547 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-414547 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-414547 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-414547" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-414547
--- PASS: TestCertOptions (45.43s)
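Aside (not part of the harness output): TestCertOptions starts the cluster with extra apiserver SANs and a non-default port, then reads the generated certificate back over ssh. A rough Go sketch of that verification; the profile name and the openssl command line are copied from the log, and the SAN strings being searched for are the values passed on the start line above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the test runs: dump the apiserver cert from inside the node.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-414547",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Println("ssh/openssl failed:", err)
		return
	}
	for _, want := range []string{"www.google.com", "192.168.15.15"} {
		if strings.Contains(string(out), want) {
			fmt.Println("found expected SAN:", want)
		} else {
			fmt.Println("missing expected SAN:", want)
		}
	}
}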

                                                
                                    
x
+
TestCertExpiration (360.53s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-602924 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-602924 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (56.679449429s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-602924 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-602924 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (2m2.911856461s)
helpers_test.go:175: Cleaning up "cert-expiration-602924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-602924
--- PASS: TestCertExpiration (360.53s)
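Aside (not part of the harness output): TestCertExpiration is two starts on the same profile, first with certificates valid for only 3m and then with --cert-expiration=8760h; presumably the point is that the second start has to regenerate the by-then expired certificates, which is consistent with it taking over two minutes. A sketch that replays both starts; the flags are copied from the log, and the sleep is an assumption standing in for whatever wait the test itself performs.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func start(extra ...string) {
	args := append([]string{"start", "-p", "cert-expiration-602924", "--memory=3072",
		"--driver=kvm2", "--container-runtime=crio"}, extra...)
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("start %v: err=%v\n%s\n", extra, err, out)
}

func main() {
	start("--cert-expiration=3m")
	time.Sleep(3 * time.Minute) // assumed: let the short-lived certificates lapse
	start("--cert-expiration=8760h")
}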

                                                
                                    
x
+
TestForceSystemdFlag (55.15s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-806647 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-806647 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (54.053750975s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-806647 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-806647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-806647
--- PASS: TestForceSystemdFlag (55.15s)
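Aside (not part of the harness output): after starting with --force-systemd the test reads CRI-O's drop-in config to confirm the cgroup manager took effect. A sketch of that check; the profile and the path /etc/crio/crio.conf.d/02-crio.conf come from the log, while the exact key looked for (cgroup_manager = "systemd") is an assumption about what the assertion checks.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-806647",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in 02-crio.conf")
	}
}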

                                                
                                    
x
+
TestForceSystemdEnv (61.35s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-822918 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-822918 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.414463322s)
helpers_test.go:175: Cleaning up "force-systemd-env-822918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-822918
--- PASS: TestForceSystemdEnv (61.35s)

                                                
                                    
x
+
TestErrorSpam/setup (40.11s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-884683 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-884683 --driver=kvm2  --container-runtime=crio
E1101 08:37:03.369099    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:03.375560    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:03.386977    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:03.408469    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:03.449936    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:03.531391    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:03.692929    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:04.014689    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:04.656003    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:05.937595    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:08.499302    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:37:13.621108    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-884683 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-884683 --driver=kvm2  --container-runtime=crio: (40.112759805s)
--- PASS: TestErrorSpam/setup (40.11s)

                                                
                                    
x
+
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
x
+
TestErrorSpam/status (0.67s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 status
--- PASS: TestErrorSpam/status (0.67s)

                                                
                                    
x
+
TestErrorSpam/pause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 pause
E1101 08:37:23.863184    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.71s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 unpause
--- PASS: TestErrorSpam/unpause (1.71s)

                                                
                                    
x
+
TestErrorSpam/stop (86.75s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 stop
E1101 08:37:44.345501    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:38:25.308559    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 stop: (1m23.258817386s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 stop: (2.008352638s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-884683 --log_dir /tmp/nospam-884683 stop: (1.483159821s)
--- PASS: TestErrorSpam/stop (86.75s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21835-5912/.minikube/files/etc/test/nested/copy/9793/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
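Aside (not part of the harness output): CopySyncFile only records the host-side path; the interesting behaviour is that anything placed under $MINIKUBE_HOME/.minikube/files is synced into the node at the same relative path on start, so the file above should appear inside the VM at /etc/test/nested/copy/9793/hosts. A sketch that reads the synced copy back; the in-node path mirrors the host path shown above, and the ssh invocation itself is an assumption rather than something this subtest runs.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The host file lives under .minikube/files/etc/test/nested/copy/9793/hosts,
	// so the synced copy should be readable at the same path inside the node.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-146919",
		"ssh", "sudo cat /etc/test/nested/copy/9793/hosts").CombinedOutput()
	if err != nil {
		fmt.Println("synced file not readable:", err)
		return
	}
	fmt.Printf("synced file contents:\n%s", out)
}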

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (82.06s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-146919 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1101 08:39:47.233369    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-146919 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m22.064330566s)
--- PASS: TestFunctional/serial/StartWithProxy (82.06s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (38.29s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 08:40:15.914856    9793 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-146919 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-146919 --alsologtostderr -v=8: (38.289337423s)
functional_test.go:678: soft start took 38.290030947s for "functional-146919" cluster.
I1101 08:40:54.204557    9793 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (38.29s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-146919 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.15s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-146919 cache add registry.k8s.io/pause:3.1: (1.031686686s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-146919 cache add registry.k8s.io/pause:3.3: (1.082021696s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-146919 cache add registry.k8s.io/pause:latest: (1.063049621s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.18s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.99s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-146919 /tmp/TestFunctionalserialCacheCmdcacheadd_local1415159286/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 cache add minikube-local-cache-test:functional-146919
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-146919 cache add minikube-local-cache-test:functional-146919: (1.62834641s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 cache delete minikube-local-cache-test:functional-146919
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-146919
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.99s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-146919 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (170.876648ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
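Aside (not part of the harness output): cache_reload above is the full round trip: crictl rmi removes the image from the node, crictl inspecti then fails, `cache reload` pushes the cached copy back in, and the final inspecti succeeds. A compact Go sketch of the same sequence; the profile and image names are from the log, the mk helper is just local shorthand.

package main

import (
	"fmt"
	"os/exec"
)

// mk runs the minikube binary against the functional-146919 profile.
func mk(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-146919"}, args...)...).CombinedOutput()
	fmt.Printf("%v -> err=%v\n%s", args, err, out)
	return err
}

func main() {
	mk("ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	mk("cache", "reload")
	if err := mk("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("image still missing after cache reload")
	}
}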

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 kubectl -- --context functional-146919 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-146919 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (32.21s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-146919 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-146919 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.207252206s)
functional_test.go:776: restart took 32.207439827s for "functional-146919" cluster.
I1101 08:41:33.946577    9793 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (32.21s)
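Aside (not part of the harness output): ExtraConfig restarts the existing profile with an --extra-config flag, which is how per-component options (here an apiserver admission plugin) reach the kubeadm configuration; the same setting later shows up in the profile's ExtraOptions in the DryRun output further down. A sketch of the restart plus a follow-up check; the start flags are copied from the log, while the kubectl query and the component=kube-apiserver selector are assumptions about how one might confirm the flag landed.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Restart the running profile with the extra apiserver flag, as in the log.
	out, err := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-146919",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision",
		"--wait=all").CombinedOutput()
	if err != nil {
		fmt.Printf("restart failed: %v\n%s", err, out)
		return
	}
	// Assumed follow-up: read the apiserver pod's command line to see the plugin list.
	cmdline, _ := exec.Command("kubectl", "--context", "functional-146919",
		"-n", "kube-system", "get", "pods", "-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	fmt.Println("apiserver command:", string(cmdline))
}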

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-146919 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
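Aside (not part of the harness output): ComponentHealth lists the tier=control-plane pods and checks that each is Running and Ready, which is exactly what the phase/status pairs above report. A sketch that pulls the same two facts with a jsonpath template instead of walking the full JSON; the context and label selector come from the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tpl := `{range .items[*]}{.metadata.labels.component}{" phase="}{.status.phase}` +
		`{" ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`
	out, err := exec.Command("kubectl", "--context", "functional-146919",
		"-n", "kube-system", "get", "po", "-l", "tier=control-plane",
		"-o", "jsonpath="+tpl).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}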

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-146919 logs: (1.442799187s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.43s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 logs --file /tmp/TestFunctionalserialLogsFileCmd2770232416/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-146919 logs --file /tmp/TestFunctionalserialLogsFileCmd2770232416/001/logs.txt: (1.427619438s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.43s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.58s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-146919 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-146919
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-146919: exit status 115 (227.357052ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.186:30853 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-146919 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-146919 delete -f testdata/invalidsvc.yaml: (1.106721712s)
--- PASS: TestFunctional/serial/InvalidService (4.58s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-146919 config get cpus: exit status 14 (62.28245ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-146919 config get cpus: exit status 14 (61.388681ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
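Aside (not part of the harness output): the ConfigCmd exchanges above show that `config get` on an unset key exits with status 14, while set/get/unset themselves succeed. A small sketch that treats exit code 14 as "key not set" rather than a hard failure; the key name cpus and the profile are from the log, and the interpretation of 14 simply follows the behaviour shown above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-146919", "config", "get", "cpus").CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("cpus = %s", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 14:
		fmt.Println("cpus is not set in the minikube config")
	default:
		fmt.Printf("config get failed: %v\n%s", err, out)
	}
}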

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (17.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-146919 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-146919 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 16087: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.02s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-146919 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-146919 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (114.12587ms)

                                                
                                                
-- stdout --
	* [functional-146919] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:42:07.438644   16106 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:42:07.438738   16106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:42:07.438744   16106 out.go:374] Setting ErrFile to fd 2...
	I1101 08:42:07.438749   16106 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:42:07.439003   16106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 08:42:07.439431   16106 out.go:368] Setting JSON to false
	I1101 08:42:07.440274   16106 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1474,"bootTime":1761985053,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:42:07.440387   16106 start.go:143] virtualization: kvm guest
	I1101 08:42:07.441684   16106 out.go:179] * [functional-146919] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:42:07.443458   16106 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:42:07.443468   16106 notify.go:221] Checking for updates...
	I1101 08:42:07.444818   16106 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:42:07.446350   16106 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 08:42:07.447655   16106 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 08:42:07.449238   16106 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:42:07.450451   16106 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:42:07.452103   16106 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:42:07.452566   16106 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:42:07.485755   16106 out.go:179] * Using the kvm2 driver based on existing profile
	I1101 08:42:07.486964   16106 start.go:309] selected driver: kvm2
	I1101 08:42:07.486979   16106 start.go:930] validating driver "kvm2" against &{Name:functional-146919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-146919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:42:07.487098   16106 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:42:07.488974   16106 out.go:203] 
	W1101 08:42:07.490121   16106 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 08:42:07.491198   16106 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-146919 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.23s)
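Aside (not part of the harness output): the first DryRun call asks for 250MB and is rejected up front with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23; the second, without --memory, validates cleanly against the existing profile. A sketch that reproduces the failing call and checks for that exit code; the flags are the ones on the start line above, and treating 23 as the expected status simply mirrors the log.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-146919",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=kvm2", "--container-runtime=crio").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		fmt.Println("dry run rejected the 250MB request, as in the report")
	} else {
		fmt.Printf("unexpected result: err=%v\n%s", err, out)
	}
}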

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-146919 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-146919 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.259652ms)

                                                
                                                
-- stdout --
	* [functional-146919] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:42:07.312777   16080 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:42:07.312888   16080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:42:07.312902   16080 out.go:374] Setting ErrFile to fd 2...
	I1101 08:42:07.312907   16080 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:42:07.313238   16080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 08:42:07.313684   16080 out.go:368] Setting JSON to false
	I1101 08:42:07.314724   16080 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":1474,"bootTime":1761985053,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:42:07.314824   16080 start.go:143] virtualization: kvm guest
	I1101 08:42:07.320379   16080 out.go:179] * [functional-146919] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1101 08:42:07.322078   16080 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 08:42:07.322106   16080 notify.go:221] Checking for updates...
	I1101 08:42:07.324929   16080 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:42:07.326329   16080 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 08:42:07.327623   16080 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 08:42:07.328935   16080 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:42:07.330326   16080 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:42:07.332475   16080 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:42:07.333089   16080 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:42:07.368788   16080 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1101 08:42:07.370397   16080 start.go:309] selected driver: kvm2
	I1101 08:42:07.370415   16080 start.go:930] validating driver "kvm2" against &{Name:functional-146919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-146919 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.186 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:42:07.370547   16080 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:42:07.372846   16080 out.go:203] 
	W1101 08:42:07.374672   16080 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 08:42:07.376143   16080 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (15.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-146919 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-146919 expose deployment hello-node-connect --type=NodePort --port=8080
I1101 08:41:51.489052    9793 detect.go:223] nested VM detected
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-sfzzx" [45398303-03b7-43c5-a337-624559fd214b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-sfzzx" [45398303-03b7-43c5-a337-624559fd214b] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.261088488s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.186:30121
functional_test.go:1680: http://192.168.39.186:30121: success! body:
Request served by hello-node-connect-7d85dfc575-sfzzx

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.186:30121
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (15.94s)
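Aside (not part of the harness output): ServiceCmdConnect is the plain NodePort path: create a deployment from kicbase/echo-server, expose it on port 8080, ask minikube for the node URL, then GET it and expect the echoed request back (the body above is that echo). A sketch of the same flow; names, image, and port are from the log, while the 10-second HTTP timeout and the lack of a readiness wait are simplifications.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	exec.Command("kubectl", "--context", "functional-146919", "create", "deployment",
		"hello-node-connect", "--image", "kicbase/echo-server").Run()
	exec.Command("kubectl", "--context", "functional-146919", "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080").Run()

	// minikube prints the NodePort URL once the service exists.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-146919",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("service --url failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))

	client := &http.Client{Timeout: 10 * time.Second} // assumed timeout
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s ->\n%s", url, body)
}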

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (39.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [45a81a43-bd64-40ff-8249-6fb924bedb03] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003871901s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-146919 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-146919 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-146919 get pvc myclaim -o=json
I1101 08:41:49.736398    9793 retry.go:31] will retry after 1.490882496s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:0732c1da-4d2c-45fb-80a6-b1b4eb080561 ResourceVersion:724 Generation:0 CreationTimestamp:2025-11-01 08:41:49 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0017ef2c0 VolumeMode:0xc0017ef2d0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-146919 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-146919 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [0912ddbb-e4e3-493e-9aec-1e04d01ad4a4] Pending
helpers_test.go:352: "sp-pod" [0912ddbb-e4e3-493e-9aec-1e04d01ad4a4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [0912ddbb-e4e3-493e-9aec-1e04d01ad4a4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.006685457s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-146919 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-146919 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-146919 delete -f testdata/storage-provisioner/pod.yaml: (2.388779625s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-146919 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [cec0baef-f7ae-4064-801b-f87159dfb504] Pending
helpers_test.go:352: "sp-pod" [cec0baef-f7ae-4064-801b-f87159dfb504] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [cec0baef-f7ae-4064-801b-f87159dfb504] Running
2025/11/01 08:42:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004906556s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-146919 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (39.91s)
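Note: the PVC flow exercised above can be replayed by hand against the same profile; a rough sketch using only the commands shown in this log (assumes the minikube repo's testdata directory as the working directory):
    kubectl --context functional-146919 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-146919 get pvc myclaim -o=json          # wait until .status.phase is "Bound"
    kubectl --context functional-146919 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-146919 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-146919 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-146919 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-146919 exec sp-pod -- ls /tmp/mount     # foo should survive the pod re-create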

                                                
                                    
TestFunctional/parallel/SSHCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.32s)
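Note: out/minikube-linux-amd64 is the freshly built binary under test; the same check can be run manually as a sketch (the hostname is expected to match the profile name, but that expectation is an assumption here):
    out/minikube-linux-amd64 -p functional-146919 ssh "echo hello"
    out/minikube-linux-amd64 -p functional-146919 ssh "cat /etc/hostname"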

                                                
                                    
TestFunctional/parallel/CpCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh -n functional-146919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 cp functional-146919:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3836106652/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh -n functional-146919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh -n functional-146919 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.06s)
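Note: a minimal manual sketch of the same copy round-trip (the /tmp destination below is a placeholder, not the test's generated temp directory):
    out/minikube-linux-amd64 -p functional-146919 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-146919 ssh -n functional-146919 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-amd64 -p functional-146919 cp functional-146919:/home/docker/cp-test.txt /tmp/cp-test.txt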

                                                
                                    
TestFunctional/parallel/MySQL (27.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-146919 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-5wbqw" [2305f1d5-872b-4e63-a1af-72456f6d4e3c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-5wbqw" [2305f1d5-872b-4e63-a1af-72456f6d4e3c] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.013684423s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-146919 exec mysql-5bb876957f-5wbqw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-146919 exec mysql-5bb876957f-5wbqw -- mysql -ppassword -e "show databases;": exit status 1 (493.049489ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 08:42:03.253896    9793 retry.go:31] will retry after 575.275486ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-146919 exec mysql-5bb876957f-5wbqw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-146919 exec mysql-5bb876957f-5wbqw -- mysql -ppassword -e "show databases;": exit status 1 (379.260816ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 08:42:04.208684    9793 retry.go:31] will retry after 1.176855537s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-146919 exec mysql-5bb876957f-5wbqw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-146919 exec mysql-5bb876957f-5wbqw -- mysql -ppassword -e "show databases;": exit status 1 (342.759114ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 08:42:05.728872    9793 retry.go:31] will retry after 3.075764955s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-146919 exec mysql-5bb876957f-5wbqw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.48s)
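Note: the retried exec failures above (ERROR 1045, then ERROR 2002) are typical while mysqld is still initializing inside the pod; the test helper simply retries until the query succeeds. A hedged manual equivalent, with <mysql-pod> standing in for the generated pod name:
    kubectl --context functional-146919 replace --force -f testdata/mysql.yaml
    kubectl --context functional-146919 get pods -l app=mysql
    kubectl --context functional-146919 exec <mysql-pod> -- mysql -ppassword -e "show databases;"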

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9793/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "sudo cat /etc/test/nested/copy/9793/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)
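Note (assumption, not shown in this log): minikube's file sync sources files from the host's .minikube/files directory and mirrors them into the guest at the same path, which is how /etc/test/nested/copy/9793/hosts ends up in the VM. A sketch of verifying the synced file:
    out/minikube-linux-amd64 -p functional-146919 ssh "sudo cat /etc/test/nested/copy/9793/hosts"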

                                                
                                    
TestFunctional/parallel/CertSync (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9793.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "sudo cat /etc/ssl/certs/9793.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9793.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "sudo cat /usr/share/ca-certificates/9793.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/97932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "sudo cat /etc/ssl/certs/97932.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/97932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "sudo cat /usr/share/ca-certificates/97932.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.01s)
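Note: the test checks each synced certificate under both its .pem name and a hashed .0 name; the hash-style filename (e.g. 51391683.0) is presumably the OpenSSL subject-hash link convention (assumption). A quick manual spot check:
    out/minikube-linux-amd64 -p functional-146919 ssh "sudo cat /etc/ssl/certs/9793.pem"
    out/minikube-linux-amd64 -p functional-146919 ssh "sudo cat /etc/ssl/certs/51391683.0"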

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-146919 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-146919 ssh "sudo systemctl is-active docker": exit status 1 (153.807671ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-146919 ssh "sudo systemctl is-active containerd": exit status 1 (164.597814ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.32s)
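Note: the non-zero exits above are expected, not failures: systemctl is-active exits non-zero (status 3 here) when a unit is inactive, so on a crio cluster both docker and containerd should report "inactive". Manual check (the crio unit name is an assumption):
    out/minikube-linux-amd64 -p functional-146919 ssh "sudo systemctl is-active crio"     # expected: active
    out/minikube-linux-amd64 -p functional-146919 ssh "sudo systemctl is-active docker"   # expected: inactive, exit 3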

                                                
                                    
TestFunctional/parallel/License (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-146919 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-146919 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-k9jlm" [c79081d0-9539-40aa-96f4-7a9e2143facb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-k9jlm" [c79081d0-9539-40aa-96f4-7a9e2143facb] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.005542954s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.19s)
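Note: the deployment used by the ServiceCmd subtests below is created with plain kubectl; a sketch of the same setup:
    kubectl --context functional-146919 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-146919 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-146919 get pods -l app=hello-node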

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "323.921969ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "57.720207ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (19.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-146919 /tmp/TestFunctionalparallelMountCmdany-port310433959/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761986502338637665" to /tmp/TestFunctionalparallelMountCmdany-port310433959/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761986502338637665" to /tmp/TestFunctionalparallelMountCmdany-port310433959/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761986502338637665" to /tmp/TestFunctionalparallelMountCmdany-port310433959/001/test-1761986502338637665
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (201.730282ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 08:41:42.540726    9793 retry.go:31] will retry after 293.728791ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 08:41 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 08:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 08:41 test-1761986502338637665
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh cat /mount-9p/test-1761986502338637665
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-146919 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e355a377-f281-4e18-8152-1f1bf5a55a37] Pending
helpers_test.go:352: "busybox-mount" [e355a377-f281-4e18-8152-1f1bf5a55a37] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e355a377-f281-4e18-8152-1f1bf5a55a37] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e355a377-f281-4e18-8152-1f1bf5a55a37] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.004358884s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-146919 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-146919 /tmp/TestFunctionalparallelMountCmdany-port310433959/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.86s)
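Note: a rough manual version of the 9p mount check, with /path/on/host as a placeholder directory:
    out/minikube-linux-amd64 mount -p functional-146919 /path/on/host:/mount-9p &
    out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-146919 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-146919 ssh "sudo umount -f /mount-9p"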

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "248.236502ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "61.790912ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 service list -o json
functional_test.go:1504: Took "246.732953ms" to run "out/minikube-linux-amd64 -p functional-146919 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.186:30528
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.186:30528
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)
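Note: the HTTPS/Format/URL subtests all resolve the same NodePort endpoint (192.168.39.186:30528 in this run); a manual sketch:
    out/minikube-linux-amd64 -p functional-146919 service hello-node --url
    out/minikube-linux-amd64 -p functional-146919 service --namespace=default --https --url hello-node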

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-146919 /tmp/TestFunctionalparallelMountCmdspecific-port1398413323/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.450129ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 08:42:02.458657    9793 retry.go:31] will retry after 484.02825ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-146919 /tmp/TestFunctionalparallelMountCmdspecific-port1398413323/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
E1101 08:42:03.360615    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-146919 ssh "sudo umount -f /mount-9p": exit status 1 (201.18958ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-146919 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-146919 /tmp/TestFunctionalparallelMountCmdspecific-port1398413323/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.55s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-146919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup686794912/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-146919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup686794912/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-146919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup686794912/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T" /mount1: exit status 1 (232.728361ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1101 08:42:03.980757    9793 retry.go:31] will retry after 388.509903ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-146919 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-146919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup686794912/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-146919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup686794912/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-146919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup686794912/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.19s)
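Note: cleanup of all concurrent mount daemons goes through a single kill switch; a manual sketch of the same teardown:
    out/minikube-linux-amd64 mount -p functional-146919 --kill=true
    out/minikube-linux-amd64 -p functional-146919 ssh "findmnt -T" /mount1   # expected to fail once the mounts are gone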

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-146919 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-146919
localhost/kicbase/echo-server:functional-146919
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-146919 image ls --format short --alsologtostderr:
I1101 08:42:18.527484   16586 out.go:360] Setting OutFile to fd 1 ...
I1101 08:42:18.527587   16586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:42:18.527592   16586 out.go:374] Setting ErrFile to fd 2...
I1101 08:42:18.527596   16586 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:42:18.527803   16586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
I1101 08:42:18.528378   16586 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:42:18.528487   16586 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:42:18.531038   16586 ssh_runner.go:195] Run: systemctl --version
I1101 08:42:18.533264   16586 main.go:143] libmachine: domain functional-146919 has defined MAC address 52:54:00:63:9e:69 in network mk-functional-146919
I1101 08:42:18.533746   16586 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:9e:69", ip: ""} in network mk-functional-146919: {Iface:virbr1 ExpiryTime:2025-11-01 09:39:09 +0000 UTC Type:0 Mac:52:54:00:63:9e:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:functional-146919 Clientid:01:52:54:00:63:9e:69}
I1101 08:42:18.533775   16586 main.go:143] libmachine: domain functional-146919 has defined IP address 192.168.39.186 and MAC address 52:54:00:63:9e:69 in network mk-functional-146919
I1101 08:42:18.534085   16586 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/functional-146919/id_rsa Username:docker}
I1101 08:42:18.644041   16586 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
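Note: per the stderr above, image ls is backed by crictl inside the guest, so the underlying data can be inspected directly; a sketch:
    out/minikube-linux-amd64 -p functional-146919 ssh "sudo crictl images --output json"
    out/minikube-linux-amd64 -p functional-146919 image ls --format table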

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-146919 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server           │ latest             │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-146919  │ 9056ab77afb8e │ 4.94MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ localhost/minikube-local-cache-test     │ functional-146919  │ 8c478330b35af │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 9d0e6f6199dcb │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-146919 image ls --format table --alsologtostderr:
I1101 08:42:18.798057   16609 out.go:360] Setting OutFile to fd 1 ...
I1101 08:42:18.798158   16609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:42:18.798162   16609 out.go:374] Setting ErrFile to fd 2...
I1101 08:42:18.798166   16609 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:42:18.798467   16609 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
I1101 08:42:18.799022   16609 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:42:18.799116   16609 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:42:18.801300   16609 ssh_runner.go:195] Run: systemctl --version
I1101 08:42:18.803928   16609 main.go:143] libmachine: domain functional-146919 has defined MAC address 52:54:00:63:9e:69 in network mk-functional-146919
I1101 08:42:18.804545   16609 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:9e:69", ip: ""} in network mk-functional-146919: {Iface:virbr1 ExpiryTime:2025-11-01 09:39:09 +0000 UTC Type:0 Mac:52:54:00:63:9e:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:functional-146919 Clientid:01:52:54:00:63:9e:69}
I1101 08:42:18.804582   16609 main.go:143] libmachine: domain functional-146919 has defined IP address 192.168.39.186 and MAC address 52:54:00:63:9e:69 in network mk-functional-146919
I1101 08:42:18.804732   16609 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/functional-146919/id_rsa Username:docker}
I1101 08:42:18.890127   16609 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-146919 image ls --format json --alsologtostderr:
[{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","r
epoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec","repoDigests":["docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":
"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b864
4839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.
34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"52546a367cc9e0d924aa3b190596a9167f
a6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae
68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-146919"],"size":"4944818"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"8c478330b35afc6ab16e0b31af11e2471c1e4c444fb93dc306add323257e2074","repoDigests":["localhost/minikube-local-cache-test@sha256:0d0be5b57b1cdcf420b109984af90a8e1a8701ae175e0637f698ec5a9676d7af"],"repoTags":["localhost/minikube-local-cache-test:functional-146919"],"size":"3330"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-146919 image ls --format json --alsologtostderr:
I1101 08:42:18.799957   16608 out.go:360] Setting OutFile to fd 1 ...
I1101 08:42:18.800051   16608 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:42:18.800059   16608 out.go:374] Setting ErrFile to fd 2...
I1101 08:42:18.800063   16608 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:42:18.800266   16608 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
I1101 08:42:18.800777   16608 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:42:18.800867   16608 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:42:18.802775   16608 ssh_runner.go:195] Run: systemctl --version
I1101 08:42:18.805554   16608 main.go:143] libmachine: domain functional-146919 has defined MAC address 52:54:00:63:9e:69 in network mk-functional-146919
I1101 08:42:18.806037   16608 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:9e:69", ip: ""} in network mk-functional-146919: {Iface:virbr1 ExpiryTime:2025-11-01 09:39:09 +0000 UTC Type:0 Mac:52:54:00:63:9e:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:functional-146919 Clientid:01:52:54:00:63:9e:69}
I1101 08:42:18.806072   16608 main.go:143] libmachine: domain functional-146919 has defined IP address 192.168.39.186 and MAC address 52:54:00:63:9e:69 in network mk-functional-146919
I1101 08:42:18.806239   16608 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/functional-146919/id_rsa Username:docker}
I1101 08:42:18.896261   16608 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-146919 image ls --format yaml --alsologtostderr:
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-146919
size: "4944818"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec
repoDigests:
- docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 8c478330b35afc6ab16e0b31af11e2471c1e4c444fb93dc306add323257e2074
repoDigests:
- localhost/minikube-local-cache-test@sha256:0d0be5b57b1cdcf420b109984af90a8e1a8701ae175e0637f698ec5a9676d7af
repoTags:
- localhost/minikube-local-cache-test:functional-146919
size: "3330"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-146919 image ls --format yaml --alsologtostderr:
I1101 08:42:18.530837   16587 out.go:360] Setting OutFile to fd 1 ...
I1101 08:42:18.530960   16587 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:42:18.530971   16587 out.go:374] Setting ErrFile to fd 2...
I1101 08:42:18.530977   16587 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:42:18.531260   16587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
I1101 08:42:18.531945   16587 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:42:18.532080   16587 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:42:18.534057   16587 ssh_runner.go:195] Run: systemctl --version
I1101 08:42:18.536539   16587 main.go:143] libmachine: domain functional-146919 has defined MAC address 52:54:00:63:9e:69 in network mk-functional-146919
I1101 08:42:18.536975   16587 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:9e:69", ip: ""} in network mk-functional-146919: {Iface:virbr1 ExpiryTime:2025-11-01 09:39:09 +0000 UTC Type:0 Mac:52:54:00:63:9e:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:functional-146919 Clientid:01:52:54:00:63:9e:69}
I1101 08:42:18.537016   16587 main.go:143] libmachine: domain functional-146919 has defined IP address 192.168.39.186 and MAC address 52:54:00:63:9e:69 in network mk-functional-146919
I1101 08:42:18.537170   16587 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/functional-146919/id_rsa Username:docker}
I1101 08:42:18.644262   16587 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
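
The YAML listing above is derived from the container runtime's own image inventory; a minimal sketch of the equivalent manual check, using the guest-side command visible in the trace (profile name taken from this run):

	# list images through CRI-O's crictl inside the minikube guest;
	# `image ls --format yaml` reshapes this JSON into the YAML shown above
	out/minikube-linux-amd64 -p functional-146919 ssh "sudo crictl images --output json"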

TestFunctional/parallel/ImageCommands/ImageBuild (3.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-146919 ssh pgrep buildkitd: exit status 1 (168.770352ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image build -t localhost/my-image:functional-146919 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-146919 image build -t localhost/my-image:functional-146919 testdata/build --alsologtostderr: (3.212715667s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-146919 image build -t localhost/my-image:functional-146919 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4d5b3e1367b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-146919
--> 89faf1b965d
Successfully tagged localhost/my-image:functional-146919
89faf1b965d1f7330be8a7503087b86bc72fad6917e116024bc1de3de8ba1035
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-146919 image build -t localhost/my-image:functional-146919 testdata/build --alsologtostderr:
I1101 08:42:19.169275   16639 out.go:360] Setting OutFile to fd 1 ...
I1101 08:42:19.169556   16639 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:42:19.169566   16639 out.go:374] Setting ErrFile to fd 2...
I1101 08:42:19.169570   16639 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 08:42:19.169754   16639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
I1101 08:42:19.170364   16639 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:42:19.170998   16639 config.go:182] Loaded profile config "functional-146919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 08:42:19.173116   16639 ssh_runner.go:195] Run: systemctl --version
I1101 08:42:19.175166   16639 main.go:143] libmachine: domain functional-146919 has defined MAC address 52:54:00:63:9e:69 in network mk-functional-146919
I1101 08:42:19.175559   16639 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:63:9e:69", ip: ""} in network mk-functional-146919: {Iface:virbr1 ExpiryTime:2025-11-01 09:39:09 +0000 UTC Type:0 Mac:52:54:00:63:9e:69 Iaid: IPaddr:192.168.39.186 Prefix:24 Hostname:functional-146919 Clientid:01:52:54:00:63:9e:69}
I1101 08:42:19.175580   16639 main.go:143] libmachine: domain functional-146919 has defined IP address 192.168.39.186 and MAC address 52:54:00:63:9e:69 in network mk-functional-146919
I1101 08:42:19.175692   16639 sshutil.go:53] new ssh client: &{IP:192.168.39.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/functional-146919/id_rsa Username:docker}
I1101 08:42:19.253782   16639 build_images.go:162] Building image from path: /tmp/build.3077569793.tar
I1101 08:42:19.253895   16639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 08:42:19.267591   16639 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3077569793.tar
I1101 08:42:19.272535   16639 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3077569793.tar: stat -c "%s %y" /var/lib/minikube/build/build.3077569793.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3077569793.tar': No such file or directory
I1101 08:42:19.272572   16639 ssh_runner.go:362] scp /tmp/build.3077569793.tar --> /var/lib/minikube/build/build.3077569793.tar (3072 bytes)
I1101 08:42:19.303602   16639 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3077569793
I1101 08:42:19.315354   16639 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3077569793 -xf /var/lib/minikube/build/build.3077569793.tar
I1101 08:42:19.326950   16639 crio.go:315] Building image: /var/lib/minikube/build/build.3077569793
I1101 08:42:19.327027   16639 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-146919 /var/lib/minikube/build/build.3077569793 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1101 08:42:22.295627   16639 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-146919 /var/lib/minikube/build/build.3077569793 --cgroup-manager=cgroupfs: (2.96856214s)
I1101 08:42:22.295715   16639 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3077569793
I1101 08:42:22.308642   16639 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3077569793.tar
I1101 08:42:22.320909   16639 build_images.go:218] Built localhost/my-image:functional-146919 from /tmp/build.3077569793.tar
I1101 08:42:22.320945   16639 build_images.go:134] succeeded building to: functional-146919
I1101 08:42:22.320950   16639 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.59s)
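
For reference, the build path exercised above stages the tarred build context on the guest and runs podman there; a minimal sketch of the guest-side steps lifted from the trace (the build.3077569793 temp paths and the image tag are this run's values, shown purely for illustration):

	# unpack the staged build context and build it with podman under cgroupfs,
	# mirroring the ssh_runner commands in the stderr above
	sudo mkdir -p /var/lib/minikube/build/build.3077569793
	sudo tar -C /var/lib/minikube/build/build.3077569793 -xf /var/lib/minikube/build/build.3077569793.tar
	sudo podman build -t localhost/my-image:functional-146919 /var/lib/minikube/build/build.3077569793 --cgroup-manager=cgroupfs

The three numbered steps in the stdout (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) come from the build file shipped in testdata/build.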

TestFunctional/parallel/ImageCommands/Setup (1.6s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.574371771s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-146919
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.60s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image load --daemon kicbase/echo-server:functional-146919 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-146919 image load --daemon kicbase/echo-server:functional-146919 --alsologtostderr: (1.440794626s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.64s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image load --daemon kicbase/echo-server:functional-146919 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-146919
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image load --daemon kicbase/echo-server:functional-146919 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image save kicbase/echo-server:functional-146919 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:395: (dbg) Done: out/minikube-linux-amd64 -p functional-146919 image save kicbase/echo-server:functional-146919 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.491494222s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image rm kicbase/echo-server:functional-146919 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
I1101 08:42:17.227659    9793 detect.go:223] nested VM detected
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-146919
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-146919 image save --daemon kicbase/echo-server:functional-146919 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-146919
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.53s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-146919
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-146919
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-146919
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (209.99s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1101 08:42:31.075291    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-750582 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (3m29.431382341s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (209.99s)

TestMultiControlPlane/serial/DeployApp (56.7s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-750582 kubectl -- rollout status deployment/busybox: (6.232960705s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1101 08:46:01.984429    9793 retry.go:31] will retry after 1.074067679s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1101 08:46:03.192220    9793 retry.go:31] will retry after 1.001602155s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1101 08:46:04.326037    9793 retry.go:31] will retry after 1.556732772s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1101 08:46:06.017798    9793 retry.go:31] will retry after 3.383551511s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1101 08:46:09.540898    9793 retry.go:31] will retry after 3.789266148s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1101 08:46:13.461620    9793 retry.go:31] will retry after 10.537868878s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1101 08:46:24.138534    9793 retry.go:31] will retry after 7.149550062s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
I1101 08:46:31.439650    9793 retry.go:31] will retry after 18.498103708s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.3.3 10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E1101 08:46:41.649457    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:46:41.655856    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:46:41.667271    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:46:41.688665    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:46:41.730096    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:46:41.811553    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:46:41.973106    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:46:42.294852    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:46:42.936309    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:46:44.217960    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:46:46.779962    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-f5b49 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-f7dc4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-jtczk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-f5b49 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-f7dc4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-jtczk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-f5b49 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-f7dc4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-jtczk -- nslookup kubernetes.default.svc.cluster.local
E1101 08:46:51.902064    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DeployApp (56.70s)
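
The retry loop above simply re-runs one jsonpath query until the busybox deployment converges on one pod IP per replica; a minimal sketch of the same checks with plain kubectl (context name taken from this run):

	# poll pod IPs until exactly three (one per replica) are reported, then
	# list the pod names used for the per-pod nslookup checks
	kubectl --context ha-750582 get pods -o jsonpath='{.items[*].status.podIP}'
	kubectl --context ha-750582 get pods -o jsonpath='{.items[*].metadata.name}'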

TestMultiControlPlane/serial/PingHostFromPods (1.28s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-f5b49 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-f5b49 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-f7dc4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-f7dc4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-jtczk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 kubectl -- exec busybox-7b57f96db7-jtczk -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.28s)
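
The exec'd pipeline above pulls the host's gateway address out of busybox's nslookup output before pinging it; a minimal sketch of what runs inside each pod (the awk/cut offsets assume busybox nslookup's line layout, and 192.168.39.1 is this run's host address):

	# resolve host.minikube.internal, take the address from line 5 of the
	# nslookup output, then confirm the host answers ICMP
	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"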

TestMultiControlPlane/serial/AddWorkerNode (44.95s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 node add --alsologtostderr -v 5
E1101 08:47:02.143497    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:47:03.359930    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:47:22.625638    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-750582 node add --alsologtostderr -v 5: (44.279179956s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (44.95s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-750582 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

TestMultiControlPlane/serial/CopyFile (10.69s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp testdata/cp-test.txt ha-750582:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3697991770/001/cp-test_ha-750582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582:/home/docker/cp-test.txt ha-750582-m02:/home/docker/cp-test_ha-750582_ha-750582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m02 "sudo cat /home/docker/cp-test_ha-750582_ha-750582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582:/home/docker/cp-test.txt ha-750582-m03:/home/docker/cp-test_ha-750582_ha-750582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m03 "sudo cat /home/docker/cp-test_ha-750582_ha-750582-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582:/home/docker/cp-test.txt ha-750582-m04:/home/docker/cp-test_ha-750582_ha-750582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m04 "sudo cat /home/docker/cp-test_ha-750582_ha-750582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp testdata/cp-test.txt ha-750582-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3697991770/001/cp-test_ha-750582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m02:/home/docker/cp-test.txt ha-750582:/home/docker/cp-test_ha-750582-m02_ha-750582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582 "sudo cat /home/docker/cp-test_ha-750582-m02_ha-750582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m02:/home/docker/cp-test.txt ha-750582-m03:/home/docker/cp-test_ha-750582-m02_ha-750582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m03 "sudo cat /home/docker/cp-test_ha-750582-m02_ha-750582-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m02:/home/docker/cp-test.txt ha-750582-m04:/home/docker/cp-test_ha-750582-m02_ha-750582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m04 "sudo cat /home/docker/cp-test_ha-750582-m02_ha-750582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp testdata/cp-test.txt ha-750582-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3697991770/001/cp-test_ha-750582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m03:/home/docker/cp-test.txt ha-750582:/home/docker/cp-test_ha-750582-m03_ha-750582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582 "sudo cat /home/docker/cp-test_ha-750582-m03_ha-750582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m03:/home/docker/cp-test.txt ha-750582-m02:/home/docker/cp-test_ha-750582-m03_ha-750582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m02 "sudo cat /home/docker/cp-test_ha-750582-m03_ha-750582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m03:/home/docker/cp-test.txt ha-750582-m04:/home/docker/cp-test_ha-750582-m03_ha-750582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m04 "sudo cat /home/docker/cp-test_ha-750582-m03_ha-750582-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp testdata/cp-test.txt ha-750582-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3697991770/001/cp-test_ha-750582-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m04:/home/docker/cp-test.txt ha-750582:/home/docker/cp-test_ha-750582-m04_ha-750582.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582 "sudo cat /home/docker/cp-test_ha-750582-m04_ha-750582.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m04:/home/docker/cp-test.txt ha-750582-m02:/home/docker/cp-test_ha-750582-m04_ha-750582-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m02 "sudo cat /home/docker/cp-test_ha-750582-m04_ha-750582-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 cp ha-750582-m04:/home/docker/cp-test.txt ha-750582-m03:/home/docker/cp-test_ha-750582-m04_ha-750582-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 ssh -n ha-750582-m03 "sudo cat /home/docker/cp-test_ha-750582-m04_ha-750582-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.69s)

TestMultiControlPlane/serial/StopSecondaryNode (78.68s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 node stop m02 --alsologtostderr -v 5
E1101 08:48:03.587022    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-750582 node stop m02 --alsologtostderr -v 5: (1m18.195979161s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-750582 status --alsologtostderr -v 5: exit status 7 (483.854273ms)

-- stdout --
	ha-750582
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-750582-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-750582-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-750582-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1101 08:49:07.996426   19998 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:49:07.996549   19998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:49:07.996558   19998 out.go:374] Setting ErrFile to fd 2...
	I1101 08:49:07.996563   19998 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:49:07.996760   19998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 08:49:07.996946   19998 out.go:368] Setting JSON to false
	I1101 08:49:07.996983   19998 mustload.go:66] Loading cluster: ha-750582
	I1101 08:49:07.997056   19998 notify.go:221] Checking for updates...
	I1101 08:49:07.997385   19998 config.go:182] Loaded profile config "ha-750582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:49:07.997400   19998 status.go:174] checking status of ha-750582 ...
	I1101 08:49:07.999451   19998 status.go:371] ha-750582 host status = "Running" (err=<nil>)
	I1101 08:49:07.999472   19998 host.go:66] Checking if "ha-750582" exists ...
	I1101 08:49:08.002064   19998 main.go:143] libmachine: domain ha-750582 has defined MAC address 52:54:00:9b:0b:67 in network mk-ha-750582
	I1101 08:49:08.002656   19998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:67", ip: ""} in network mk-ha-750582: {Iface:virbr1 ExpiryTime:2025-11-01 09:42:40 +0000 UTC Type:0 Mac:52:54:00:9b:0b:67 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-750582 Clientid:01:52:54:00:9b:0b:67}
	I1101 08:49:08.002681   19998 main.go:143] libmachine: domain ha-750582 has defined IP address 192.168.39.163 and MAC address 52:54:00:9b:0b:67 in network mk-ha-750582
	I1101 08:49:08.002862   19998 host.go:66] Checking if "ha-750582" exists ...
	I1101 08:49:08.003092   19998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:49:08.005625   19998 main.go:143] libmachine: domain ha-750582 has defined MAC address 52:54:00:9b:0b:67 in network mk-ha-750582
	I1101 08:49:08.006029   19998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9b:0b:67", ip: ""} in network mk-ha-750582: {Iface:virbr1 ExpiryTime:2025-11-01 09:42:40 +0000 UTC Type:0 Mac:52:54:00:9b:0b:67 Iaid: IPaddr:192.168.39.163 Prefix:24 Hostname:ha-750582 Clientid:01:52:54:00:9b:0b:67}
	I1101 08:49:08.006055   19998 main.go:143] libmachine: domain ha-750582 has defined IP address 192.168.39.163 and MAC address 52:54:00:9b:0b:67 in network mk-ha-750582
	I1101 08:49:08.006213   19998 sshutil.go:53] new ssh client: &{IP:192.168.39.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/ha-750582/id_rsa Username:docker}
	I1101 08:49:08.087915   19998 ssh_runner.go:195] Run: systemctl --version
	I1101 08:49:08.094571   19998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:49:08.112593   19998 kubeconfig.go:125] found "ha-750582" server: "https://192.168.39.254:8443"
	I1101 08:49:08.112620   19998 api_server.go:166] Checking apiserver status ...
	I1101 08:49:08.112650   19998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:49:08.137991   19998 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup
	W1101 08:49:08.149490   19998 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 08:49:08.149533   19998 ssh_runner.go:195] Run: ls
	I1101 08:49:08.154900   19998 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1101 08:49:08.160729   19998 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1101 08:49:08.160759   19998 status.go:463] ha-750582 apiserver status = Running (err=<nil>)
	I1101 08:49:08.160772   19998 status.go:176] ha-750582 status: &{Name:ha-750582 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:49:08.160792   19998 status.go:174] checking status of ha-750582-m02 ...
	I1101 08:49:08.162392   19998 status.go:371] ha-750582-m02 host status = "Stopped" (err=<nil>)
	I1101 08:49:08.162413   19998 status.go:384] host is not running, skipping remaining checks
	I1101 08:49:08.162417   19998 status.go:176] ha-750582-m02 status: &{Name:ha-750582-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:49:08.162429   19998 status.go:174] checking status of ha-750582-m03 ...
	I1101 08:49:08.163595   19998 status.go:371] ha-750582-m03 host status = "Running" (err=<nil>)
	I1101 08:49:08.163611   19998 host.go:66] Checking if "ha-750582-m03" exists ...
	I1101 08:49:08.165674   19998 main.go:143] libmachine: domain ha-750582-m03 has defined MAC address 52:54:00:ac:9e:23 in network mk-ha-750582
	I1101 08:49:08.166013   19998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:9e:23", ip: ""} in network mk-ha-750582: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:47 +0000 UTC Type:0 Mac:52:54:00:ac:9e:23 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-750582-m03 Clientid:01:52:54:00:ac:9e:23}
	I1101 08:49:08.166038   19998 main.go:143] libmachine: domain ha-750582-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:ac:9e:23 in network mk-ha-750582
	I1101 08:49:08.166162   19998 host.go:66] Checking if "ha-750582-m03" exists ...
	I1101 08:49:08.166367   19998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:49:08.168261   19998 main.go:143] libmachine: domain ha-750582-m03 has defined MAC address 52:54:00:ac:9e:23 in network mk-ha-750582
	I1101 08:49:08.168586   19998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ac:9e:23", ip: ""} in network mk-ha-750582: {Iface:virbr1 ExpiryTime:2025-11-01 09:44:47 +0000 UTC Type:0 Mac:52:54:00:ac:9e:23 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-750582-m03 Clientid:01:52:54:00:ac:9e:23}
	I1101 08:49:08.168608   19998 main.go:143] libmachine: domain ha-750582-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:ac:9e:23 in network mk-ha-750582
	I1101 08:49:08.168732   19998 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/ha-750582-m03/id_rsa Username:docker}
	I1101 08:49:08.259340   19998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:49:08.277235   19998 kubeconfig.go:125] found "ha-750582" server: "https://192.168.39.254:8443"
	I1101 08:49:08.277257   19998 api_server.go:166] Checking apiserver status ...
	I1101 08:49:08.277287   19998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:49:08.296308   19998 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1793/cgroup
	W1101 08:49:08.307376   19998 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1793/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 08:49:08.307458   19998 ssh_runner.go:195] Run: ls
	I1101 08:49:08.312721   19998 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1101 08:49:08.317367   19998 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1101 08:49:08.317386   19998 status.go:463] ha-750582-m03 apiserver status = Running (err=<nil>)
	I1101 08:49:08.317393   19998 status.go:176] ha-750582-m03 status: &{Name:ha-750582-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 08:49:08.317405   19998 status.go:174] checking status of ha-750582-m04 ...
	I1101 08:49:08.318892   19998 status.go:371] ha-750582-m04 host status = "Running" (err=<nil>)
	I1101 08:49:08.318910   19998 host.go:66] Checking if "ha-750582-m04" exists ...
	I1101 08:49:08.321244   19998 main.go:143] libmachine: domain ha-750582-m04 has defined MAC address 52:54:00:ef:ea:3b in network mk-ha-750582
	I1101 08:49:08.321648   19998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ea:3b", ip: ""} in network mk-ha-750582: {Iface:virbr1 ExpiryTime:2025-11-01 09:47:09 +0000 UTC Type:0 Mac:52:54:00:ef:ea:3b Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-750582-m04 Clientid:01:52:54:00:ef:ea:3b}
	I1101 08:49:08.321674   19998 main.go:143] libmachine: domain ha-750582-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:ef:ea:3b in network mk-ha-750582
	I1101 08:49:08.321804   19998 host.go:66] Checking if "ha-750582-m04" exists ...
	I1101 08:49:08.322041   19998 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:49:08.324270   19998 main.go:143] libmachine: domain ha-750582-m04 has defined MAC address 52:54:00:ef:ea:3b in network mk-ha-750582
	I1101 08:49:08.324700   19998 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ef:ea:3b", ip: ""} in network mk-ha-750582: {Iface:virbr1 ExpiryTime:2025-11-01 09:47:09 +0000 UTC Type:0 Mac:52:54:00:ef:ea:3b Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:ha-750582-m04 Clientid:01:52:54:00:ef:ea:3b}
	I1101 08:49:08.324720   19998 main.go:143] libmachine: domain ha-750582-m04 has defined IP address 192.168.39.96 and MAC address 52:54:00:ef:ea:3b in network mk-ha-750582
	I1101 08:49:08.324889   19998 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/ha-750582-m04/id_rsa Username:docker}
	I1101 08:49:08.405248   19998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:49:08.422068   19998 status.go:176] ha-750582-m04 status: &{Name:ha-750582-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (78.68s)
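
The status probe in the stderr trace locates the apiserver process, tries the (absent) freezer cgroup, and then falls back to the cluster endpoint's /healthz; a minimal sketch of those checks run by hand inside the guest (the PID and the 192.168.39.254 endpoint are this run's values, and anonymous access to /healthz is assumed, as the default public-info-viewer binding allows):

	# find the apiserver, show why the freezer lookup returns nothing on this
	# guest, then query the shared control-plane endpoint's health check
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo egrep '^[0-9]+:freezer:' /proc/1365/cgroup || true
	curl -k https://192.168.39.254:8443/healthz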

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.51s)

TestMultiControlPlane/serial/RestartSecondaryNode (37.96s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 node start m02 --alsologtostderr -v 5
E1101 08:49:25.509050    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-750582 node start m02 --alsologtostderr -v 5: (37.167003822s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (37.96s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.71s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.71s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.8s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 stop --alsologtostderr -v 5
E1101 08:51:41.647072    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:03.361392    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:52:09.351037    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:53:26.437502    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-750582 stop --alsologtostderr -v 5: (4m21.648573335s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-750582 start --wait true --alsologtostderr -v 5: (2m2.010112236s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (383.80s)

TestMultiControlPlane/serial/DeleteSecondaryNode (18.65s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-750582 node delete m03 --alsologtostderr -v 5: (18.020703195s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (18.65s)
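Note: the check at ha_test.go:521 above asserts node health with a kubectl go-template that prints each node's Ready condition. A minimal standalone sketch of the same check, not taken from the test source itself; it assumes kubectl is on PATH and the kubeconfig points at the cluster under test:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as in the run above: emit the status of every Ready condition.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		panic(err)
	}
	// Every surviving node should report "True" after the secondary delete.
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("node reports Ready =", status)
		}
	}
}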

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.50s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (241.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 stop --alsologtostderr -v 5
E1101 08:56:41.646931    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 08:57:03.360467    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-750582 stop --alsologtostderr -v 5: (4m1.582247239s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-750582 status --alsologtostderr -v 5: exit status 7 (61.111646ms)

                                                
                                                
-- stdout --
	ha-750582
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-750582-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-750582-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:00:32.191188   23299 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:00:32.191305   23299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:00:32.191316   23299 out.go:374] Setting ErrFile to fd 2...
	I1101 09:00:32.191322   23299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:00:32.191500   23299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 09:00:32.191660   23299 out.go:368] Setting JSON to false
	I1101 09:00:32.191691   23299 mustload.go:66] Loading cluster: ha-750582
	I1101 09:00:32.191804   23299 notify.go:221] Checking for updates...
	I1101 09:00:32.192044   23299 config.go:182] Loaded profile config "ha-750582": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:00:32.192057   23299 status.go:174] checking status of ha-750582 ...
	I1101 09:00:32.194253   23299 status.go:371] ha-750582 host status = "Stopped" (err=<nil>)
	I1101 09:00:32.194273   23299 status.go:384] host is not running, skipping remaining checks
	I1101 09:00:32.194280   23299 status.go:176] ha-750582 status: &{Name:ha-750582 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:00:32.194300   23299 status.go:174] checking status of ha-750582-m02 ...
	I1101 09:00:32.195498   23299 status.go:371] ha-750582-m02 host status = "Stopped" (err=<nil>)
	I1101 09:00:32.195513   23299 status.go:384] host is not running, skipping remaining checks
	I1101 09:00:32.195519   23299 status.go:176] ha-750582-m02 status: &{Name:ha-750582-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:00:32.195532   23299 status.go:174] checking status of ha-750582-m04 ...
	I1101 09:00:32.196730   23299 status.go:371] ha-750582-m04 host status = "Stopped" (err=<nil>)
	I1101 09:00:32.196742   23299 status.go:384] host is not running, skipping remaining checks
	I1101 09:00:32.196746   23299 status.go:176] ha-750582-m04 status: &{Name:ha-750582-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (241.64s)
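Note: the stderr above ends with status.go printing each node's state as a Go struct value (&{Name:ha-750582 Host:Stopped ...}). A hedged sketch of a struct with the same field names, inferred only from that log line (minikube's real type is not reproduced here), showing how %+v yields that exact formatting:

package main

import "fmt"

// Status mirrors the fields visible in the logged value; only its printed shape matters here.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := &Status{Name: "ha-750582", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	fmt.Printf("%+v\n", s) // prints: &{Name:ha-750582 Host:Stopped Kubelet:Stopped APIServer:Stopped ...}
}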

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (101.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio
E1101 09:01:41.646855    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:02:03.360669    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-750582 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=crio: (1m40.904107471s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (101.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (78.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 node add --control-plane --alsologtostderr -v 5
E1101 09:03:04.712859    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-750582 node add --control-plane --alsologtostderr -v 5: (1m17.658141041s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-750582 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.33s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.68s)

                                                
                                    
x
+
TestJSONOutput/start/Command (54.92s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-377670 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-377670 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=crio: (54.921709864s)
--- PASS: TestJSONOutput/start/Command (54.92s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-377670 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-377670 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.07s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-377670 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-377670 --output=json --user=testUser: (7.074506375s)
--- PASS: TestJSONOutput/stop/Command (7.07s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-155757 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-155757 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (79.975071ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dc0a92dc-2a9c-439b-a049-c6c5d5e6cda3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-155757] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7ed82f70-1387-4f9a-8b58-a309a68ad482","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21835"}}
	{"specversion":"1.0","id":"08cd2263-2558-462c-8e6e-ee4b03ab23ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b4b933a4-62bc-48f8-86ab-31a1e5785d46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig"}}
	{"specversion":"1.0","id":"8ac75f46-d69b-45c5-b691-d3900e559ffc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube"}}
	{"specversion":"1.0","id":"dc7dffb5-996e-4c9f-9402-b394f631d80a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e1c924ab-323c-4704-b6ce-97e0b8a09a39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"58df1b2d-d5bb-4c52-97c5-071645d2d8f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-155757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-155757
--- PASS: TestErrorJSONOutput (0.23s)
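Note: TestErrorJSONOutput drives minikube start with --output=json, and the stdout above shows one JSON event per line, with the io.k8s.sigs.minikube.error event carrying exitcode, name and message as string fields. A sketch that filters out such error events; it assumes the JSON stream is piped to stdin, and the field layout is taken from the lines above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models only the fields this sketch needs from each JSON line.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start ... --output=json | this program
	for sc.Scan() {
		var e event
		if json.Unmarshal(sc.Bytes(), &e) != nil {
			continue // skip anything that is not a JSON event line
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: exit code %s, %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}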

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (81.66s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-122761 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-122761 --driver=kvm2  --container-runtime=crio: (38.517404592s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-126129 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-126129 --driver=kvm2  --container-runtime=crio: (40.586919238s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-122761
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-126129
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-126129" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-126129
helpers_test.go:175: Cleaning up "first-122761" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-122761
--- PASS: TestMinikubeProfile (81.66s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (22.29s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-796374 --memory=3072 --mount-string /tmp/TestMountStartserial2273840249/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-796374 --memory=3072 --mount-string /tmp/TestMountStartserial2273840249/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.291856174s)
--- PASS: TestMountStart/serial/StartWithMountFirst (22.29s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-796374 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-796374 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
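Note: the mount verification above pairs an ls of /minikube-host with findmnt --json /minikube-host. A sketch that decodes findmnt's JSON; the filesystems/target/source/fstype/options field names follow findmnt's documented JSON output, not anything printed in this report:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// findmntOut matches the top-level shape of `findmnt --json <target>`.
type findmntOut struct {
	Filesystems []struct {
		Target  string `json:"target"`
		Source  string `json:"source"`
		FSType  string `json:"fstype"`
		Options string `json:"options"`
	} `json:"filesystems"`
}

func main() {
	raw, err := exec.Command("findmnt", "--json", "/minikube-host").Output()
	if err != nil {
		panic(err) // non-zero exit here means the mount point was not found
	}
	var f findmntOut
	if err := json.Unmarshal(raw, &f); err != nil {
		panic(err)
	}
	for _, fs := range f.Filesystems {
		fmt.Printf("%s on %s type %s (%s)\n", fs.Source, fs.Target, fs.FSType, fs.Options)
	}
}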

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (22.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-813000 --memory=3072 --mount-string /tmp/TestMountStartserial2273840249/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1101 09:06:41.652756    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-813000 --memory=3072 --mount-string /tmp/TestMountStartserial2273840249/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.282727299s)
--- PASS: TestMountStart/serial/StartWithMountSecond (22.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-813000 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-813000 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-796374 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-813000 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-813000 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-813000
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-813000: (1.300515924s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (18.14s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-813000
E1101 09:07:03.361011    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-813000: (17.139933869s)
--- PASS: TestMountStart/serial/RestartStopped (18.14s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-813000 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-813000 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.31s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (100.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033193 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033193 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m40.072686533s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (100.39s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-033193 -- rollout status deployment/busybox: (4.934887515s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- exec busybox-7b57f96db7-2xtbv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- exec busybox-7b57f96db7-7dbvw -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- exec busybox-7b57f96db7-2xtbv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- exec busybox-7b57f96db7-7dbvw -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- exec busybox-7b57f96db7-2xtbv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- exec busybox-7b57f96db7-7dbvw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.59s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- exec busybox-7b57f96db7-2xtbv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- exec busybox-7b57f96db7-2xtbv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- exec busybox-7b57f96db7-7dbvw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-033193 -- exec busybox-7b57f96db7-7dbvw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
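Note: the host-ping step above extracts the host IP inside each pod with nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3, i.e. field 3 of line 5 of the output. A rough Go equivalent of that pipeline; the line and field offsets are tied to busybox nslookup's layout as used in the test, and strings.Fields collapses repeated spaces where cut would not:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("nslookup", "host.minikube.internal").Output()
	if err != nil {
		panic(err)
	}
	lines := strings.Split(string(out), "\n")
	if len(lines) >= 5 {
		fields := strings.Fields(lines[4]) // awk 'NR==5'
		if len(fields) >= 3 {
			fmt.Println("host IP:", fields[2]) // cut -d' ' -f3
		}
	}
}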

                                                
                                    
x
+
TestMultiNode/serial/AddNode (43.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-033193 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-033193 -v=5 --alsologtostderr: (43.493887136s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.94s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-033193 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.46s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (5.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp testdata/cp-test.txt multinode-033193:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp multinode-033193:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile683835969/001/cp-test_multinode-033193.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp multinode-033193:/home/docker/cp-test.txt multinode-033193-m02:/home/docker/cp-test_multinode-033193_multinode-033193-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m02 "sudo cat /home/docker/cp-test_multinode-033193_multinode-033193-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp multinode-033193:/home/docker/cp-test.txt multinode-033193-m03:/home/docker/cp-test_multinode-033193_multinode-033193-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m03 "sudo cat /home/docker/cp-test_multinode-033193_multinode-033193-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp testdata/cp-test.txt multinode-033193-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp multinode-033193-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile683835969/001/cp-test_multinode-033193-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp multinode-033193-m02:/home/docker/cp-test.txt multinode-033193:/home/docker/cp-test_multinode-033193-m02_multinode-033193.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193 "sudo cat /home/docker/cp-test_multinode-033193-m02_multinode-033193.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp multinode-033193-m02:/home/docker/cp-test.txt multinode-033193-m03:/home/docker/cp-test_multinode-033193-m02_multinode-033193-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m03 "sudo cat /home/docker/cp-test_multinode-033193-m02_multinode-033193-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp testdata/cp-test.txt multinode-033193-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp multinode-033193-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile683835969/001/cp-test_multinode-033193-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp multinode-033193-m03:/home/docker/cp-test.txt multinode-033193:/home/docker/cp-test_multinode-033193-m03_multinode-033193.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193 "sudo cat /home/docker/cp-test_multinode-033193-m03_multinode-033193.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 cp multinode-033193-m03:/home/docker/cp-test.txt multinode-033193-m02:/home/docker/cp-test_multinode-033193-m03_multinode-033193-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 ssh -n multinode-033193-m02 "sudo cat /home/docker/cp-test_multinode-033193-m03_multinode-033193-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.91s)
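Note: the CopyFile block above repeats one pattern per node pair: copy a file in with minikube cp, then read it back with minikube ssh -n <node> "sudo cat ..." to confirm the content survived. A hedged sketch of that round trip for a single node, with the binary path and profile name copied from the run above and error handling kept minimal:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	bin, profile := "out/minikube-linux-amd64", "multinode-033193"
	// Copy the local test file onto the node.
	if err := exec.Command(bin, "-p", profile, "cp", "testdata/cp-test.txt",
		profile+":/home/docker/cp-test.txt").Run(); err != nil {
		panic(err)
	}
	// Read it back over ssh and compare with the original.
	got, err := exec.Command(bin, "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println("round trip intact:", bytes.Equal(got, want))
}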

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-033193 node stop m03: (1.646790638s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033193 status: exit status 7 (325.306733ms)

                                                
                                                
-- stdout --
	multinode-033193
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-033193-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-033193-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033193 status --alsologtostderr: exit status 7 (315.917695ms)

                                                
                                                
-- stdout --
	multinode-033193
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-033193-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-033193-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:09:49.172234   28815 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:09:49.172475   28815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:09:49.172484   28815 out.go:374] Setting ErrFile to fd 2...
	I1101 09:09:49.172488   28815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:09:49.172696   28815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 09:09:49.172850   28815 out.go:368] Setting JSON to false
	I1101 09:09:49.172881   28815 mustload.go:66] Loading cluster: multinode-033193
	I1101 09:09:49.172968   28815 notify.go:221] Checking for updates...
	I1101 09:09:49.173243   28815 config.go:182] Loaded profile config "multinode-033193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:09:49.173257   28815 status.go:174] checking status of multinode-033193 ...
	I1101 09:09:49.175166   28815 status.go:371] multinode-033193 host status = "Running" (err=<nil>)
	I1101 09:09:49.175183   28815 host.go:66] Checking if "multinode-033193" exists ...
	I1101 09:09:49.177704   28815 main.go:143] libmachine: domain multinode-033193 has defined MAC address 52:54:00:7e:c2:e6 in network mk-multinode-033193
	I1101 09:09:49.178122   28815 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7e:c2:e6", ip: ""} in network mk-multinode-033193: {Iface:virbr1 ExpiryTime:2025-11-01 10:07:23 +0000 UTC Type:0 Mac:52:54:00:7e:c2:e6 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-033193 Clientid:01:52:54:00:7e:c2:e6}
	I1101 09:09:49.178148   28815 main.go:143] libmachine: domain multinode-033193 has defined IP address 192.168.39.194 and MAC address 52:54:00:7e:c2:e6 in network mk-multinode-033193
	I1101 09:09:49.178287   28815 host.go:66] Checking if "multinode-033193" exists ...
	I1101 09:09:49.178474   28815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:09:49.180448   28815 main.go:143] libmachine: domain multinode-033193 has defined MAC address 52:54:00:7e:c2:e6 in network mk-multinode-033193
	I1101 09:09:49.180856   28815 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:7e:c2:e6", ip: ""} in network mk-multinode-033193: {Iface:virbr1 ExpiryTime:2025-11-01 10:07:23 +0000 UTC Type:0 Mac:52:54:00:7e:c2:e6 Iaid: IPaddr:192.168.39.194 Prefix:24 Hostname:multinode-033193 Clientid:01:52:54:00:7e:c2:e6}
	I1101 09:09:49.180908   28815 main.go:143] libmachine: domain multinode-033193 has defined IP address 192.168.39.194 and MAC address 52:54:00:7e:c2:e6 in network mk-multinode-033193
	I1101 09:09:49.181094   28815 sshutil.go:53] new ssh client: &{IP:192.168.39.194 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/multinode-033193/id_rsa Username:docker}
	I1101 09:09:49.263015   28815 ssh_runner.go:195] Run: systemctl --version
	I1101 09:09:49.269572   28815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:09:49.286649   28815 kubeconfig.go:125] found "multinode-033193" server: "https://192.168.39.194:8443"
	I1101 09:09:49.286685   28815 api_server.go:166] Checking apiserver status ...
	I1101 09:09:49.286728   28815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:09:49.306852   28815 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1346/cgroup
	W1101 09:09:49.318280   28815 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1346/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:09:49.318367   28815 ssh_runner.go:195] Run: ls
	I1101 09:09:49.323081   28815 api_server.go:253] Checking apiserver healthz at https://192.168.39.194:8443/healthz ...
	I1101 09:09:49.327585   28815 api_server.go:279] https://192.168.39.194:8443/healthz returned 200:
	ok
	I1101 09:09:49.327610   28815 status.go:463] multinode-033193 apiserver status = Running (err=<nil>)
	I1101 09:09:49.327626   28815 status.go:176] multinode-033193 status: &{Name:multinode-033193 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:09:49.327645   28815 status.go:174] checking status of multinode-033193-m02 ...
	I1101 09:09:49.329123   28815 status.go:371] multinode-033193-m02 host status = "Running" (err=<nil>)
	I1101 09:09:49.329143   28815 host.go:66] Checking if "multinode-033193-m02" exists ...
	I1101 09:09:49.331392   28815 main.go:143] libmachine: domain multinode-033193-m02 has defined MAC address 52:54:00:ce:fa:d2 in network mk-multinode-033193
	I1101 09:09:49.331790   28815 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ce:fa:d2", ip: ""} in network mk-multinode-033193: {Iface:virbr1 ExpiryTime:2025-11-01 10:08:21 +0000 UTC Type:0 Mac:52:54:00:ce:fa:d2 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:multinode-033193-m02 Clientid:01:52:54:00:ce:fa:d2}
	I1101 09:09:49.331820   28815 main.go:143] libmachine: domain multinode-033193-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:ce:fa:d2 in network mk-multinode-033193
	I1101 09:09:49.331964   28815 host.go:66] Checking if "multinode-033193-m02" exists ...
	I1101 09:09:49.332170   28815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:09:49.334171   28815 main.go:143] libmachine: domain multinode-033193-m02 has defined MAC address 52:54:00:ce:fa:d2 in network mk-multinode-033193
	I1101 09:09:49.334554   28815 main.go:143] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:ce:fa:d2", ip: ""} in network mk-multinode-033193: {Iface:virbr1 ExpiryTime:2025-11-01 10:08:21 +0000 UTC Type:0 Mac:52:54:00:ce:fa:d2 Iaid: IPaddr:192.168.39.62 Prefix:24 Hostname:multinode-033193-m02 Clientid:01:52:54:00:ce:fa:d2}
	I1101 09:09:49.334580   28815 main.go:143] libmachine: domain multinode-033193-m02 has defined IP address 192.168.39.62 and MAC address 52:54:00:ce:fa:d2 in network mk-multinode-033193
	I1101 09:09:49.334725   28815 sshutil.go:53] new ssh client: &{IP:192.168.39.62 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/21835-5912/.minikube/machines/multinode-033193-m02/id_rsa Username:docker}
	I1101 09:09:49.413304   28815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:09:49.428587   28815 status.go:176] multinode-033193-m02 status: &{Name:multinode-033193-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:09:49.428626   28815 status.go:174] checking status of multinode-033193-m03 ...
	I1101 09:09:49.430403   28815 status.go:371] multinode-033193-m03 host status = "Stopped" (err=<nil>)
	I1101 09:09:49.430418   28815 status.go:384] host is not running, skipping remaining checks
	I1101 09:09:49.430423   28815 status.go:176] multinode-033193-m03 status: &{Name:multinode-033193-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (40.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 node start m03 -v=5 --alsologtostderr
E1101 09:10:06.439607    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-033193 node start m03 -v=5 --alsologtostderr: (40.311904496s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.80s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (298.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-033193
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-033193
E1101 09:11:41.653391    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:12:03.361201    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-033193: (2m57.266568836s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033193 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033193 --wait=true -v=5 --alsologtostderr: (2m1.514774673s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-033193
--- PASS: TestMultiNode/serial/RestartKeepsNodes (298.90s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-033193 node delete m03: (2.079594725s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.53s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (176.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 stop
E1101 09:16:41.645419    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:17:03.360510    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-033193 stop: (2m56.260022049s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033193 status: exit status 7 (61.502808ms)

                                                
                                                
-- stdout --
	multinode-033193
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-033193-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-033193 status --alsologtostderr: exit status 7 (60.118127ms)

                                                
                                                
-- stdout --
	multinode-033193
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-033193-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:18:28.046295   31624 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:18:28.046395   31624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:18:28.046407   31624 out.go:374] Setting ErrFile to fd 2...
	I1101 09:18:28.046413   31624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:18:28.046597   31624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 09:18:28.046799   31624 out.go:368] Setting JSON to false
	I1101 09:18:28.046837   31624 mustload.go:66] Loading cluster: multinode-033193
	I1101 09:18:28.046931   31624 notify.go:221] Checking for updates...
	I1101 09:18:28.047333   31624 config.go:182] Loaded profile config "multinode-033193": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:18:28.047351   31624 status.go:174] checking status of multinode-033193 ...
	I1101 09:18:28.049503   31624 status.go:371] multinode-033193 host status = "Stopped" (err=<nil>)
	I1101 09:18:28.049516   31624 status.go:384] host is not running, skipping remaining checks
	I1101 09:18:28.049521   31624 status.go:176] multinode-033193 status: &{Name:multinode-033193 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:18:28.049536   31624 status.go:174] checking status of multinode-033193-m02 ...
	I1101 09:18:28.050589   31624 status.go:371] multinode-033193-m02 host status = "Stopped" (err=<nil>)
	I1101 09:18:28.050601   31624 status.go:384] host is not running, skipping remaining checks
	I1101 09:18:28.050605   31624 status.go:176] multinode-033193-m02 status: &{Name:multinode-033193-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (176.38s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (122.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033193 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1101 09:19:44.714953    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033193 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m2.143599753s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-033193 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (122.58s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (40.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-033193
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033193-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-033193-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (72.828829ms)

                                                
                                                
-- stdout --
	* [multinode-033193-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-033193-m02' is duplicated with machine name 'multinode-033193-m02' in profile 'multinode-033193'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-033193-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-033193-m03 --driver=kvm2  --container-runtime=crio: (38.958092908s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-033193
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-033193: exit status 80 (191.589115ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-033193 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-033193-m03 already exists in multinode-033193-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-033193-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.12s)

                                                
                                    
x
+
TestScheduledStopUnix (109.65s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-606229 --memory=3072 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-606229 --memory=3072 --driver=kvm2  --container-runtime=crio: (38.027377506s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-606229 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-606229 -n scheduled-stop-606229
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-606229 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 09:24:22.211306    9793 retry.go:31] will retry after 77.722µs: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.212493    9793 retry.go:31] will retry after 183.322µs: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.213623    9793 retry.go:31] will retry after 135.371µs: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.214775    9793 retry.go:31] will retry after 368.788µs: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.215930    9793 retry.go:31] will retry after 379.052µs: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.217085    9793 retry.go:31] will retry after 851.864µs: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.218254    9793 retry.go:31] will retry after 733.421µs: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.219404    9793 retry.go:31] will retry after 1.063594ms: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.220536    9793 retry.go:31] will retry after 2.637669ms: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.223777    9793 retry.go:31] will retry after 2.687925ms: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.226979    9793 retry.go:31] will retry after 7.635739ms: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.235244    9793 retry.go:31] will retry after 5.698389ms: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.241538    9793 retry.go:31] will retry after 7.221051ms: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.249825    9793 retry.go:31] will retry after 25.970746ms: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.276069    9793 retry.go:31] will retry after 35.478284ms: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
I1101 09:24:22.312384    9793 retry.go:31] will retry after 61.416287ms: open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/scheduled-stop-606229/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-606229 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-606229 -n scheduled-stop-606229
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-606229
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-606229 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-606229
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-606229: exit status 7 (59.339535ms)

                                                
                                                
-- stdout --
	scheduled-stop-606229
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-606229 -n scheduled-stop-606229
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-606229 -n scheduled-stop-606229: exit status 7 (57.07467ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-606229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-606229
--- PASS: TestScheduledStopUnix (109.65s)
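The scheduled-stop flow exercised above can be driven by hand with the same flags the test uses; a minimal sketch, reusing this run's profile name purely for illustration:

  # schedule a stop 5 minutes out, then inspect the pending timer
  out/minikube-linux-amd64 stop -p scheduled-stop-606229 --schedule 5m
  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-606229 -n scheduled-stop-606229

  # replace the pending timer with a shorter one, or cancel it outright
  out/minikube-linux-amd64 stop -p scheduled-stop-606229 --schedule 15s
  out/minikube-linux-amd64 stop -p scheduled-stop-606229 --cancel-scheduled

  # after a scheduled stop has fired, status exits 7 and reports the host as Stopped
  out/minikube-linux-amd64 status -p scheduled-stop-606229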

                                                
                                    
x
+
TestRunningBinaryUpgrade (120.98s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1049590961 start -p running-upgrade-874785 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1049590961 start -p running-upgrade-874785 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (58.906467553s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-874785 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-874785 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.026859029s)
helpers_test.go:175: Cleaning up "running-upgrade-874785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-874785
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-874785: (1.324656405s)
--- PASS: TestRunningBinaryUpgrade (120.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-709275 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-709275 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=crio: exit status 14 (93.385895ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-709275] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
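As the MK_USAGE failure above demonstrates, --no-kubernetes and --kubernetes-version are mutually exclusive. A rough sketch of the distinction, built only from invocations shown in this report (the config unset line is the remedy the error message itself prints):

  # rejected with exit status 14: both flags at once
  out/minikube-linux-amd64 start -p NoKubernetes-709275 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=crio

  # if kubernetes-version is set as a global config value, clear it first
  out/minikube-linux-amd64 config unset kubernetes-version

  # accepted: --no-kubernetes on its own, as in the StartWithStopK8s run below
  out/minikube-linux-amd64 start -p NoKubernetes-709275 --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=crio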

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (103.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-709275 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-709275 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m43.54137103s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-709275 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (103.83s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.46s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (122.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3696786455 start -p stopped-upgrade-775726 --memory=3072 --vm-driver=kvm2  --container-runtime=crio
E1101 09:26:41.645412    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:26:46.441592    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:27:03.359904    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3696786455 start -p stopped-upgrade-775726 --memory=3072 --vm-driver=kvm2  --container-runtime=crio: (1m20.550431263s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3696786455 -p stopped-upgrade-775726 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3696786455 -p stopped-upgrade-775726 stop: (1.577941573s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-775726 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-775726 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.404379179s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (122.53s)
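The upgrade path validated here is simply: provision the profile with an old release, stop it, then restart the same profile with the binary under test. Condensed from the log above (the /tmp path is the temporary copy of the v1.32.0 binary staged by the test):

  /tmp/minikube-v1.32.0.3696786455 start -p stopped-upgrade-775726 --memory=3072 --vm-driver=kvm2 --container-runtime=crio
  /tmp/minikube-v1.32.0.3696786455 -p stopped-upgrade-775726 stop
  out/minikube-linux-amd64 start -p stopped-upgrade-775726 --memory=3072 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio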

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (45.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-709275 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-709275 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (44.277080286s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-709275 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-709275 status -o json: exit status 2 (193.408859ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-709275","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-709275
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (45.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (38.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-709275 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-709275 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (38.818841724s)
--- PASS: TestNoKubernetes/serial/Start (38.82s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-775726
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-775726: (1.12584858s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                    
x
+
TestPause/serial/Start (85.52s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-855890 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-855890 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m25.522026841s)
--- PASS: TestPause/serial/Start (85.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-709275 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-709275 "sudo systemctl is-active --quiet service kubelet": exit status 1 (163.554606ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.16s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.92s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-709275
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-709275: (1.231796757s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (40.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-709275 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-709275 --driver=kvm2  --container-runtime=crio: (40.654685807s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (40.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-997526 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-997526 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (140.858716ms)

                                                
                                                
-- stdout --
	* [false-997526] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21835
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:29:17.658199   38295 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:29:17.658352   38295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:29:17.658363   38295 out.go:374] Setting ErrFile to fd 2...
	I1101 09:29:17.658367   38295 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:29:17.658580   38295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21835-5912/.minikube/bin
	I1101 09:29:17.661477   38295 out.go:368] Setting JSON to false
	I1101 09:29:17.662799   38295 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":4305,"bootTime":1761985053,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:29:17.662873   38295 start.go:143] virtualization: kvm guest
	I1101 09:29:17.664816   38295 out.go:179] * [false-997526] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:29:17.666330   38295 notify.go:221] Checking for updates...
	I1101 09:29:17.666337   38295 out.go:179]   - MINIKUBE_LOCATION=21835
	I1101 09:29:17.667531   38295 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:29:17.668643   38295 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21835-5912/kubeconfig
	I1101 09:29:17.669941   38295 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21835-5912/.minikube
	I1101 09:29:17.671529   38295 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:29:17.672692   38295 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:29:17.675093   38295 config.go:182] Loaded profile config "NoKubernetes-709275": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1101 09:29:17.675282   38295 config.go:182] Loaded profile config "kubernetes-upgrade-133315": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:17.675418   38295 config.go:182] Loaded profile config "pause-855890": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:29:17.675531   38295 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:29:17.722799   38295 out.go:179] * Using the kvm2 driver based on user configuration
	I1101 09:29:17.724378   38295 start.go:309] selected driver: kvm2
	I1101 09:29:17.724397   38295 start.go:930] validating driver "kvm2" against <nil>
	I1101 09:29:17.724428   38295 start.go:941] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:29:17.726768   38295 out.go:203] 
	W1101 09:29:17.728233   38295 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 09:29:17.729630   38295 out.go:203] 

                                                
                                                
** /stderr **
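This failure is the behaviour under test: with --container-runtime=crio, minikube rejects --cni=false because CRI-O needs a CNI plugin to provide pod networking. For comparison, a start line that should clear this validation, sketched under the assumption that "bridge" is an accepted --cni value (it is not exercised anywhere in this run):

  out/minikube-linux-amd64 start -p false-997526 --memory=3072 --cni=bridge --driver=kvm2 --container-runtime=crio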
net_test.go:88: 
----------------------- debugLogs start: false-997526 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-997526" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-997526" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:27:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.77:8443
  name: kubernetes-upgrade-133315
contexts:
- context:
    cluster: kubernetes-upgrade-133315
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:27:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-133315
  name: kubernetes-upgrade-133315
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-133315
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-997526

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-997526"

                                                
                                                
----------------------- debugLogs end: false-997526 [took: 3.432867239s] --------------------------------
helpers_test.go:175: Cleaning up "false-997526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-997526
--- PASS: TestNetworkPlugins/group/false (3.74s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-709275 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-709275 "sudo systemctl is-active --quiet service kubelet": exit status 1 (189.342969ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
x
+
TestISOImage/Setup (21.57s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:46: (dbg) Run:  out/minikube-linux-amd64 start -p guest-649821 --no-kubernetes --driver=kvm2  --container-runtime=crio
iso_test.go:46: (dbg) Done: out/minikube-linux-amd64 start -p guest-649821 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.566627236s)
--- PASS: TestISOImage/Setup (21.57s)

                                                
                                    
x
+
TestISOImage/Binaries/crictl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "which crictl"
--- PASS: TestISOImage/Binaries/crictl (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/curl (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "which curl"
--- PASS: TestISOImage/Binaries/curl (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/docker (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "which docker"
--- PASS: TestISOImage/Binaries/docker (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/git (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "which git"
--- PASS: TestISOImage/Binaries/git (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/iptables (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "which iptables"
--- PASS: TestISOImage/Binaries/iptables (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/podman (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "which podman"
--- PASS: TestISOImage/Binaries/podman (0.20s)

                                                
                                    
x
+
TestISOImage/Binaries/rsync (0.18s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "which rsync"
--- PASS: TestISOImage/Binaries/rsync (0.18s)

                                                
                                    
x
+
TestISOImage/Binaries/socat (0.19s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "which socat"
--- PASS: TestISOImage/Binaries/socat (0.19s)

                                                
                                    
x
+
TestISOImage/Binaries/wget (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "which wget"
--- PASS: TestISOImage/Binaries/wget (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "which VBoxControl"
--- PASS: TestISOImage/Binaries/VBoxControl (0.17s)

                                                
                                    
x
+
TestISOImage/Binaries/VBoxService (0.2s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:75: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "which VBoxService"
--- PASS: TestISOImage/Binaries/VBoxService (0.20s)
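Editor's note: the Binaries subtests above all follow one pattern: ssh into the guest VM and run `which <binary>`, treating a non-zero exit as a missing binary. A minimal Go sketch of that pattern, assuming the minikube binary is run from out/ as in the log and that the guest-649821 profile already exists; the helper name is illustrative, not the suite's own code.

package main

import (
	"fmt"
	"os/exec"
)

// checkGuestBinary asks the guest VM whether a binary is on PATH by running
// `minikube -p <profile> ssh "which <name>"`; a non-zero exit means "not found".
func checkGuestBinary(profile, name string) error {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", fmt.Sprintf("which %s", name))
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s not found in guest: %v (output: %s)", name, err, out)
	}
	fmt.Printf("%s -> %s", name, out)
	return nil
}

func main() {
	for _, bin := range []string{"podman", "rsync", "socat", "wget", "VBoxControl", "VBoxService"} {
		if err := checkGuestBinary("guest-649821", bin); err != nil {
			fmt.Println(err)
		}
	}
}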

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (96.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-281823 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-281823 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (1m36.620768895s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (96.62s)
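Editor's note: FirstStart brings up a fresh cluster on the kvm2 driver with the crio runtime pinned to Kubernetes v1.28.0. The flags below are copied from the log line above; the 15-minute deadline and invoking the binary from out/ are assumptions of this sketch, not the suite's own timeouts.

package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()

	// Same invocation as the FirstStart step above.
	args := []string{
		"start", "-p", "old-k8s-version-281823",
		"--memory=3072", "--alsologtostderr", "--wait=true",
		"--kvm-network=default", "--kvm-qemu-uri=qemu:///system",
		"--disable-driver-mounts", "--keep-context=false",
		"--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=v1.28.0",
	}

	started := time.Now()
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", args...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	log.Printf("cluster up after %s", time.Since(started))
}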

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (103.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-198999 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 09:31:41.645531    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:32:03.359880    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-198999 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m43.77358145s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (103.77s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-281823 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [58f5ce6b-80c4-45dc-9cb7-7714a8071071] Pending
helpers_test.go:352: "busybox" [58f5ce6b-80c4-45dc-9cb7-7714a8071071] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [58f5ce6b-80c4-45dc-9cb7-7714a8071071] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004246567s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-281823 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.32s)
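Editor's note: DeployApp creates the busybox pod, waits for it to become healthy, then reads its open-file limit. A sketch of the same sequence with plain kubectl; using `kubectl wait` in place of the suite's own label-polling helper is an assumption of this sketch.

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on any failure, echoing its combined output.
func run(name string, args ...string) []byte {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return out
}

func main() {
	ctxArgs := []string{"--context", "old-k8s-version-281823"}
	run("kubectl", append(ctxArgs, "create", "-f", "testdata/busybox.yaml")...)
	run("kubectl", append(ctxArgs, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m0s")...)
	out := run("kubectl", append(ctxArgs, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")...)
	log.Printf("ulimit -n inside busybox: %s", out)
}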

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-281823 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-281823 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.073336283s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-281823 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.16s)
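Editor's note: EnableAddonWhileActive turns on metrics-server while the cluster is running, redirects the addon to an alternate image and registry with the --images/--registries overrides, then inspects the resulting Deployment. A sketch of the same two commands; nothing here goes beyond the flags shown in the log.

package main

import (
	"log"
	"os/exec"
)

func main() {
	enable := exec.Command("out/minikube-linux-amd64",
		"addons", "enable", "metrics-server", "-p", "old-k8s-version-281823",
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	if out, err := enable.CombinedOutput(); err != nil {
		log.Fatalf("enable metrics-server failed: %v\n%s", err, out)
	}

	// Confirm the override landed by describing the deployment it manages.
	describe := exec.Command("kubectl", "--context", "old-k8s-version-281823",
		"describe", "deploy/metrics-server", "-n", "kube-system")
	out, err := describe.CombinedOutput()
	if err != nil {
		log.Fatalf("describe failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}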

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (84.00s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-281823 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-281823 --alsologtostderr -v=3: (1m24.000754414s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (84.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-198999 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9d026249-c1f3-4c59-9b08-2abea6d30ac2] Pending
helpers_test.go:352: "busybox" [9d026249-c1f3-4c59-9b08-2abea6d30ac2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9d026249-c1f3-4c59-9b08-2abea6d30ac2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.005972998s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-198999 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-198999 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-198999 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.052467982s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-198999 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (85.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-198999 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-198999 --alsologtostderr -v=3: (1m25.860011735s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (85.86s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-281823 -n old-k8s-version-281823
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-281823 -n old-k8s-version-281823: exit status 7 (57.486348ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-281823 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
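Editor's note: EnableAddonAfterStop tolerates `minikube status` failing while the host is down (the "(may be ok)" lines above) and then enables the dashboard addon anyway. A sketch of that tolerance; treating exit code 7 specifically as "host stopped" follows the log above rather than any documented contract.

package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-281823"

	status := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", profile, "-n", profile)
	if out, err := status.CombinedOutput(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
			log.Printf("host reported stopped (exit 7): %s", out) // expected while stopped
		} else {
			log.Fatalf("unexpected status failure: %v\n%s", err, out)
		}
	}

	enable := exec.Command("out/minikube-linux-amd64",
		"addons", "enable", "dashboard", "-p", profile,
		"--images=MetricsScraper=registry.k8s.io/echoserver:1.4")
	if out, err := enable.CombinedOutput(); err != nil {
		log.Fatalf("enabling dashboard failed: %v\n%s", err, out)
	}
}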

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (44.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-281823 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-281823 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.0: (44.41273846s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-281823 -n old-k8s-version-281823
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.69s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-198999 -n no-preload-198999
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-198999 -n no-preload-198999: exit status 7 (58.50718ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-198999 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (56.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-198999 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-198999 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (56.422058148s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-198999 -n no-preload-198999
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (56.79s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pvjlz" [60cfbb95-2197-4a07-b81e-df5b511754bb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pvjlz" [60cfbb95-2197-4a07-b81e-df5b511754bb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.005197919s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-pvjlz" [60cfbb95-2197-4a07-b81e-df5b511754bb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006140308s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-281823 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-281823 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
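Editor's note: VerifyKubernetesImages lists the node's image store as JSON and reports images a stock minikube cluster would not pull on its own (busybox and kindnetd above). In this sketch the `repoTags` field name used for decoding and the small allow-list of expected references are assumptions; the real test carries its own reference list.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "old-k8s-version-281823", "image", "list", "--format=json").Output()
	if err != nil {
		log.Fatalf("image list failed: %v", err)
	}

	var images []struct {
		RepoTags []string `json:"repoTags"` // assumed field name for this sketch
	}
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("could not decode image list: %v", err)
	}

	expected := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	for _, img := range images {
		for _, ref := range img.RepoTags {
			known := false
			for _, prefix := range expected {
				if strings.HasPrefix(ref, prefix) {
					known = true
					break
				}
			}
			if !known {
				fmt.Println("found non-minikube image:", ref)
			}
		}
	}
}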

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.90s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-281823 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-281823 -n old-k8s-version-281823
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-281823 -n old-k8s-version-281823: exit status 2 (240.21354ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-281823 -n old-k8s-version-281823
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-281823 -n old-k8s-version-281823: exit status 2 (222.20222ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-281823 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-281823 -n old-k8s-version-281823
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-281823 -n old-k8s-version-281823
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.90s)
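Editor's note: the Pause step expects `minikube status` to exit non-zero (status 2) while components are paused and only checks the printed state (APIServer=Paused, Kubelet=Stopped). A sketch of that pause, verify, unpause round trip; the profile name comes from the log above and the exit-code handling mirrors the "(may be ok)" notes.

package main

import (
	"errors"
	"log"
	"os/exec"
	"strings"
)

// componentState reads one field of `minikube status`, tolerating the non-zero
// exit that status returns while the component is paused or stopped.
func componentState(profile, field string) string {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	if err != nil {
		var exitErr *exec.ExitError
		if !errors.As(err, &exitErr) {
			log.Fatalf("status %s failed: %v", field, err)
		}
		// Exit status 2 is expected while paused; the printed state still matters.
	}
	return strings.TrimSpace(string(out))
}

func main() {
	const profile = "old-k8s-version-281823"

	if out, err := exec.Command("out/minikube-linux-amd64",
		"pause", "-p", profile, "--alsologtostderr", "-v=1").CombinedOutput(); err != nil {
		log.Fatalf("pause failed: %v\n%s", err, out)
	}
	log.Printf("APIServer=%s Kubelet=%s",
		componentState(profile, "APIServer"), componentState(profile, "Kubelet"))

	if out, err := exec.Command("out/minikube-linux-amd64",
		"unpause", "-p", profile, "--alsologtostderr", "-v=1").CombinedOutput(); err != nil {
		log.Fatalf("unpause failed: %v\n%s", err, out)
	}
}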

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (54.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-274104 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-274104 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (54.811684335s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.81s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (96.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-294528 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-294528 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (1m36.819104464s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (96.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-798lw" [37259dd7-9fb4-4ab1-9fc9-8bc768d6a72e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-798lw" [37259dd7-9fb4-4ab1-9fc9-8bc768d6a72e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.004507596s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-798lw" [37259dd7-9fb4-4ab1-9fc9-8bc768d6a72e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005198306s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-198999 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-198999 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-198999 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-198999 --alsologtostderr -v=1: (1.102582893s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-198999 -n no-preload-198999
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-198999 -n no-preload-198999: exit status 2 (253.730796ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-198999 -n no-preload-198999
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-198999 -n no-preload-198999: exit status 2 (251.040622ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-198999 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-198999 -n no-preload-198999
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-198999 -n no-preload-198999
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (46.52s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-404310 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-404310 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (46.51636692s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.52s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (11.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-274104 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [02e1b343-9920-4f3f-962a-9efcedc3362b] Pending
helpers_test.go:352: "busybox" [02e1b343-9920-4f3f-962a-9efcedc3362b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [02e1b343-9920-4f3f-962a-9efcedc3362b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.003632579s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-274104 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-274104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-274104 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (87.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-274104 --alsologtostderr -v=3
E1101 09:36:24.716424    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:36:41.645377    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-274104 --alsologtostderr -v=3: (1m27.78424314s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (87.78s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-404310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-404310 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.176887548s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-404310 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-404310 --alsologtostderr -v=3: (11.131938877s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-294528 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [94683ad2-dc8e-4938-8ff0-eb758cbe62ac] Pending
E1101 09:37:03.360701    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [94683ad2-dc8e-4938-8ff0-eb758cbe62ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [94683ad2-dc8e-4938-8ff0-eb758cbe62ac] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.004590941s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-294528 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-404310 -n newest-cni-404310
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-404310 -n newest-cni-404310: exit status 7 (59.519558ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-404310 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (32.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-404310 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-404310 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (31.932152892s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-404310 -n newest-cni-404310
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-294528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-294528 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (87.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-294528 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-294528 --alsologtostderr -v=3: (1m27.710331117s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (87.71s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-404310 image list --format=json
E1101 09:37:37.218033    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:37:37.224537    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:37:37.235978    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:37:37.257427    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:37:37.298873    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-404310 --alsologtostderr -v=1
E1101 09:37:37.381115    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:37:37.543368    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:37:37.865443    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-404310 -n newest-cni-404310
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-404310 -n newest-cni-404310: exit status 2 (205.223177ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-404310 -n newest-cni-404310
E1101 09:37:38.507791    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-404310 -n newest-cni-404310: exit status 2 (207.410437ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-404310 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-404310 -n newest-cni-404310
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-404310 -n newest-cni-404310
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (90.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
E1101 09:37:42.350873    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:37:47.473076    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m30.620399505s)
--- PASS: TestNetworkPlugins/group/auto/Start (90.62s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274104 -n embed-certs-274104
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274104 -n embed-certs-274104: exit status 7 (66.551523ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-274104 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (55.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-274104 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 09:37:57.715245    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:11.470465    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:11.476959    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:11.488395    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:11.509908    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:11.551369    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:11.632896    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:11.794712    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:12.116506    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:12.758185    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:14.040115    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:16.602100    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:18.197182    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:21.726367    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:38:31.968566    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-274104 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (55.570719356s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-274104 -n embed-certs-274104
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.88s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-294528 -n default-k8s-diff-port-294528
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-294528 -n default-k8s-diff-port-294528: exit status 7 (68.521234ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-294528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-294528 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-294528 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.34.1: (43.736826561s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-294528 -n default-k8s-diff-port-294528
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xsdq8" [93b1c327-3691-4684-bf5b-cb0a3b392e41] Running
E1101 09:38:52.450103    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003413663s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xsdq8" [93b1c327-3691-4684-bf5b-cb0a3b392e41] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003855466s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-274104 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-274104 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-274104 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-274104 -n embed-certs-274104
E1101 09:38:59.159021    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-274104 -n embed-certs-274104: exit status 2 (242.239417ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-274104 -n embed-certs-274104
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-274104 -n embed-certs-274104: exit status 2 (226.920881ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-274104 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-274104 -n embed-certs-274104
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-274104 -n embed-certs-274104
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.81s)
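Note: the Pause sequence above is pause the profile, confirm the API server reports "Paused" and the kubelet reports "Stopped" via Go-template status output (where exit status 2 is expected), then unpause. A minimal Go sketch of the same sequence, illustrative only: it shells out to a `minikube` binary assumed to be on PATH and reuses the profile name from this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// statusField reads one field of `minikube status` using the same
// --format={{.Field}} templates seen in the log. A non-zero exit here is
// expected while components are paused, so only the printed value is used.
func statusField(profile, field string) string {
	out, _ := exec.Command("minikube", "status",
		"--format", "{{."+field+"}}", "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "embed-certs-274104" // profile name taken from the log above
	exec.Command("minikube", "pause", "-p", profile).Run()
	fmt.Println("APIServer:", statusField(profile, "APIServer")) // expect "Paused"
	fmt.Println("Kubelet:  ", statusField(profile, "Kubelet"))   // expect "Stopped"
	exec.Command("minikube", "unpause", "-p", profile).Run()
}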

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m30.995382496s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-997526 "pgrep -a kubelet"
I1101 09:39:11.774410    9793 config.go:182] Loaded profile config "auto-997526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)
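For reference, the KubeletFlags checks in this report only capture the kubelet command line inside the guest with `pgrep -a kubelet` over `minikube ssh`. A hedged Go sketch of that probe, with the profile name copied from the log and `minikube` assumed to be on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same probe as the test: print the kubelet process and its flags.
	out, err := exec.Command("minikube", "ssh", "-p", "auto-997526",
		"pgrep -a kubelet").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
	}
	fmt.Print(string(out))
}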

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-997526 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hn4pm" [f6a32a35-c0d6-4a08-8d46-60db1a6e8959] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hn4pm" [f6a32a35-c0d6-4a08-8d46-60db1a6e8959] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005063084s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)
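The NetCatPod step recreates the netcat deployment and then waits for its pods (label app=netcat) to become healthy. A hedged Go sketch of an equivalent wait using `kubectl rollout status` instead of polling pods by label; the context name is taken from the log, the timeout is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until the (re)created netcat deployment reports all replicas available.
	out, err := exec.Command("kubectl", "--context", "auto-997526",
		"rollout", "status", "deployment/netcat", "--timeout=15m").CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}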

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-997526 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
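The DNS, Localhost and HairPin subtests above each run a single probe inside the netcat deployment. A compact Go sketch that reproduces the three probes verbatim, assuming kubectl is configured for the context named in the log:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one shell command inside the netcat deployment, exactly as the
// log above does via `kubectl exec deployment/netcat -- /bin/sh -c ...`.
func probe(ctx, cmd string) {
	out, err := exec.Command("kubectl", "--context", ctx, "exec",
		"deployment/netcat", "--", "/bin/sh", "-c", cmd).CombinedOutput()
	fmt.Printf("%-35s err=%v\n%s", cmd, err, out)
}

func main() {
	ctx := "auto-997526"
	probe(ctx, "nslookup kubernetes.default")    // DNS: resolve the kubernetes service
	probe(ctx, "nc -w 5 -i 5 -z localhost 8080") // Localhost: local listener reachable
	probe(ctx, "nc -w 5 -i 5 -z netcat 8080")    // HairPin: pod reaches itself via its service
}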

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qks2x" [813b9c10-6056-448a-a74b-c47281977122] Running
E1101 09:39:33.412456    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005587367s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qks2x" [813b9c10-6056-448a-a74b-c47281977122] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004879159s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-294528 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (72.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m12.411589854s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.41s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-294528 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-294528 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-294528 -n default-k8s-diff-port-294528
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-294528 -n default-k8s-diff-port-294528: exit status 2 (233.404369ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-294528 -n default-k8s-diff-port-294528
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-294528 -n default-k8s-diff-port-294528: exit status 2 (243.126122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-294528 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-294528 -n default-k8s-diff-port-294528
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-294528 -n default-k8s-diff-port-294528
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (86.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E1101 09:40:21.081313    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m26.58404592s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (86.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-mnx88" [f5f22e10-ad6b-4bd2-805b-cd28678571f6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005683802s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
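ControllerPod only asserts that the CNI's own pod (here the kindnet DaemonSet pod labelled app=kindnet in kube-system) reaches a healthy Running state. A hedged equivalent using `kubectl wait`; the context name comes from the log, the timeout is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until every pod with the app=kindnet label in kube-system is Ready.
	out, err := exec.Command("kubectl", "--context", "kindnet-997526",
		"-n", "kube-system", "wait", "--for=condition=ready",
		"pod", "-l", "app=kindnet", "--timeout=120s").CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}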

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-997526 "pgrep -a kubelet"
I1101 09:40:39.570377    9793 config.go:182] Loaded profile config "kindnet-997526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-997526 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7rxmd" [60bba8e1-5188-4c63-99eb-f7366c4e6468] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7rxmd" [60bba8e1-5188-4c63-99eb-f7366c4e6468] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004336194s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-997526 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-mlcs9" [f3216ff8-8e19-4000-a7d5-ae29e13c5570] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-mlcs9" [f3216ff8-8e19-4000-a7d5-ae29e13c5570] Running
E1101 09:40:55.333941    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/no-preload-198999/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005262068s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-997526 "pgrep -a kubelet"
I1101 09:40:57.335652    9793 config.go:182] Loaded profile config "calico-997526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-997526 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4xxgb" [e896b540-c616-413a-b107-cca220ec969f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4xxgb" [e896b540-c616-413a-b107-cca220ec969f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004106612s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (83.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m23.485968921s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-997526 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-997526 "pgrep -a kubelet"
I1101 09:41:10.756085    9793 config.go:182] Loaded profile config "custom-flannel-997526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-997526 replace --force -f testdata/netcat-deployment.yaml
I1101 09:41:11.255665    9793 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1101 09:41:11.273541    9793 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6g8kb" [c19ac6ae-a69c-4369-83bd-6e0859fbc4e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6g8kb" [c19ac6ae-a69c-4369-83bd-6e0859fbc4e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004428378s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-997526 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (74.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m14.567831609s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (94.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E1101 09:41:41.645427    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/functional-146919/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-997526 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m34.645617989s)
--- PASS: TestNetworkPlugins/group/bridge/Start (94.65s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//data (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "df -t ext4 /data | grep /data"
--- PASS: TestISOImage/PersistentMounts//data (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
E1101 09:42:03.276537    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/default-k8s-diff-port-294528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:42:03.358000    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/default-k8s-diff-port-294528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:42:03.360524    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/addons-468489/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/docker (0.19s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
--- PASS: TestISOImage/PersistentMounts//var/lib/cni (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
E1101 09:42:03.195899    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/default-k8s-diff-port-294528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:42:03.202363    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/default-k8s-diff-port-294528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:42:03.213738    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/default-k8s-diff-port-294528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:42:03.235266    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/default-k8s-diff-port-294528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/PersistentMounts//var/lib/kubelet (0.17s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
--- PASS: TestISOImage/PersistentMounts//var/lib/minikube (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
--- PASS: TestISOImage/PersistentMounts//var/lib/toolbox (0.18s)

                                                
                                    
x
+
TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:96: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
--- PASS: TestISOImage/PersistentMounts//var/lib/boot2docker (0.17s)
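Each PersistentMounts subtest above asserts that a guest path is backed by an ext4 filesystem, using `df -t ext4 <path> | grep <path>` over `minikube ssh`. A hedged Go sketch that loops over the same paths, with the profile name taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// persistent reports whether the guest path shows up in `df -t ext4` output,
// i.e. whether it is backed by a persistent ext4 mount rather than tmpfs.
func persistent(profile, path string) bool {
	cmd := exec.Command("minikube", "-p", profile, "ssh",
		fmt.Sprintf("df -t ext4 %s | grep %s", path, path))
	return cmd.Run() == nil // grep exits non-zero when nothing matches
}

func main() {
	paths := []string{"/data", "/var/lib/docker", "/var/lib/cni", "/var/lib/kubelet",
		"/var/lib/minikube", "/var/lib/toolbox", "/var/lib/boot2docker"}
	for _, p := range paths {
		fmt.Printf("%-25s persistent=%v\n", p, persistent("guest-649821", p))
	}
}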

                                                
                                    
x
+
TestISOImage/eBPFSupport (0.17s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p guest-649821 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
E1101 09:42:03.520333    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/default-k8s-diff-port-294528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestISOImage/eBPFSupport (0.17s)
E1101 09:42:05.765381    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/default-k8s-diff-port-294528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:42:08.327583    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/default-k8s-diff-port-294528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:42:13.449060    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/default-k8s-diff-port-294528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:42:23.690542    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/default-k8s-diff-port-294528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
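The eBPFSupport check only tests for /sys/kernel/btf/vmlinux, i.e. whether the guest kernel ships BTF type information (what CO-RE eBPF tooling relies on). A minimal Go sketch of the same predicate; run it inside the guest (for example over `minikube ssh`), otherwise it reports on the host kernel instead:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Same predicate as `test -f /sys/kernel/btf/vmlinux` in the log above.
	if _, err := os.Stat("/sys/kernel/btf/vmlinux"); err == nil {
		fmt.Println("OK")
	} else {
		fmt.Println("NOT FOUND") // kernel likely built without CONFIG_DEBUG_INFO_BTF
	}
}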

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-997526 "pgrep -a kubelet"
I1101 09:42:28.611500    9793 config.go:182] Loaded profile config "enable-default-cni-997526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-997526 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mmxzq" [ff43bb95-0d0a-4009-89d0-493728c9eb97] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mmxzq" [ff43bb95-0d0a-4009-89d0-493728c9eb97] Running
E1101 09:42:37.217279    9793 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/old-k8s-version-281823/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004586715s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-jnkmp" [98575cbb-d387-49f0-8b1d-2f84cbc83dce] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005367329s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-997526 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-997526 "pgrep -a kubelet"
I1101 09:42:45.869028    9793 config.go:182] Loaded profile config "flannel-997526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-997526 replace --force -f testdata/netcat-deployment.yaml
I1101 09:42:46.107840    9793 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wpbln" [bec99ebc-d6e1-406c-95c4-53f4f026581b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wpbln" [bec99ebc-d6e1-406c-95c4-53f4f026581b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004489925s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-997526 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-997526 "pgrep -a kubelet"
I1101 09:43:14.207701    9793 config.go:182] Loaded profile config "bridge-997526": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-997526 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6m6bj" [e8c79a41-959d-4cdc-ae87-439da225d33b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6m6bj" [e8c79a41-959d-4cdc-ae87-439da225d33b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004232801s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-997526 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-997526 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    

Test skip (40/343)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.1/cached-images 0
15 TestDownloadOnly/v1.34.1/binaries 0
16 TestDownloadOnly/v1.34.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.3
33 TestAddons/serial/GCPAuth/RealCredentials 0
40 TestAddons/parallel/Olm 0
47 TestAddons/parallel/AmdGpuDevicePlugin 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
55 TestHyperKitDriverInstallOrUpdate 0
56 TestHyperkitDriverSkipUpgrade 0
107 TestFunctional/parallel/DockerEnv 0
108 TestFunctional/parallel/PodmanEnv 0
120 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
126 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
156 TestFunctionalNewestKubernetes 0
157 TestGvisorAddon 0
179 TestImageBuild 0
207 TestKicCustomNetwork 0
208 TestKicExistingNetwork 0
209 TestKicCustomSubnet 0
210 TestKicStaticIP 0
242 TestChangeNoneUser 0
245 TestScheduledStopWindows 0
247 TestSkaffold 0
249 TestInsufficientStorage 0
253 TestMissingContainerUpgrade 0
267 TestStartStop/group/disable-driver-mounts 0.17
277 TestNetworkPlugins/group/kubenet 3.82
285 TestNetworkPlugins/group/cilium 4.03
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.3s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-468489 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.30s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-311736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-311736
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-997526 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-997526" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-997526" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:27:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.77:8443
  name: kubernetes-upgrade-133315
contexts:
- context:
    cluster: kubernetes-upgrade-133315
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:27:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-133315
  name: kubernetes-upgrade-133315
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-133315
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-997526

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-997526"

                                                
                                                
----------------------- debugLogs end: kubenet-997526 [took: 3.648021452s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-997526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-997526
--- SKIP: TestNetworkPlugins/group/kubenet (3.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-997526 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-997526" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21835-5912/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:27:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.39.77:8443
  name: kubernetes-upgrade-133315
contexts:
- context:
    cluster: kubernetes-upgrade-133315
    extensions:
    - extension:
        last-update: Sat, 01 Nov 2025 09:27:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-133315
  name: kubernetes-upgrade-133315
current-context: ""
kind: Config
users:
- name: kubernetes-upgrade-133315
  user:
    client-certificate: /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/client.crt
    client-key: /home/jenkins/minikube-integration/21835-5912/.minikube/profiles/kubernetes-upgrade-133315/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-997526

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-997526" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-997526"

                                                
                                                
----------------------- debugLogs end: cilium-997526 [took: 3.858617625s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-997526" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-997526
--- SKIP: TestNetworkPlugins/group/cilium (4.03s)

                                                
                                    